[binary archive — content not recoverable as text]

This file is a POSIX tar archive of Zuul CI job output, owned by user `core`, containing:

- var/home/core/zuul-output/            (directory)
- var/home/core/zuul-output/logs/       (directory)
- var/home/core/zuul-output/logs/kubelet.log.gz  (gzip-compressed kubelet log)

The remainder of the file is the gzip-compressed payload of kubelet.log and cannot be rendered as text. To inspect the log, extract the archive and decompress the member, e.g.:

    tar -xf zuul-output.tar
    gunzip var/home/core/zuul-output/logs/kubelet.log.gz
FA[mcr6IL ٻ6n%WNF*?8NIUrR{^U*1Hd;[߷13$G/#j(Ҩl@nt7|E\Hgeqr9l$?˟?30xÊGWږ_ff]LUރ0 F0b2׬f Um.{Mlmmnuu+bIJ3 I s1q8vj0aҞ)>ٰX?eoTO|(+ Ǚ.g?o?&wVo`!ON֒M!@"vn5_~CӪYTMcfo.okڽ->-ħ|M,ہ7)oz7cQمd)fڍy/dI]|g_PW&(q*I(S.}B-ir6qqhmtA-QbfcaCiJml7Ǧ=g^R3b'`d+Ws%pP[|Ns;J,7~K*hB8a)I5 (FˢwyB[]w%./?#G{$ޠ7}B>Dr8N.:KdQr 9*{Քl9FbXVPJ:$LF+kFZQ=,~uJVikƦAtX9!yybmP˫><>-ƗiĦAe?F<:ɽE2gRxE3R[GQ$`Y t'M| kfqkVX;+ώ z- O65,},x~|@u30x1y<#s4"-,21 i4JNY R FFpQ{G^/.(ѹ$]/sxG !FZH\+nSUX +A80RZpLI#$ڴk9-"&\a L4-0rճ-5p25 H*E8DGIq8Q+Õ"EQ&kw=.b}bT֡NJלԷUѬA0)&#E7O~Y Odr)D$rj)^;-E0A{dEF,I $Вn}B( 0!!j 4Z'l021RgWHحhN'c=!N9z˔01p-P,R2|)dYԆvIZ'i=Uju]Me#FtӍY}3MWc/2lYwS DJ7m=Puyy?zZ 3<"iKO}b&Fx޹v<%t=<=m{gu*>J﬐T"ZFA@J!#V UVe5vb3u~8aį47~>:uvC|S oDiW^M(]m* j<,@'r5=qV yPJ:g50!|?Pʻ{J;zrF-. ?qI 'p'Z6nt)RT8puAR(Kev8(-)8[ӥ-msL?8M(w8 ̋ǁQ{J8XeXpH30L@^j5GkjUXai5 8 "Pm /]7"+\e9w/ljQJO~X(=É ^ M,3D#Dه9TΞUEPoR(Tk>J ˴ǒc#c<K e纵⺥ <yb)إ{}k&-{>0AO~g(nG`6*WG9䢅dP9\;$zu4%=ȳPfeK{Uw?I:S~^oZW1IQGf[SȀ.&somcs(]?қLT:/zӞD%]s O+t,mDԜh3 *\k,E(.݉[]<ϋ97vEEVr=堖|-DևK 㞛O/|6fk@Nn:9LLM(J2.xqs̩Td8(H(n1i7-, |t}T:0w`Scx[][!ZPbzRj`Ck3gвN.G.  f''Ku'kL|saZMW6n#NKjfsn'p ԜȖV6t\*AiCrK!N÷\b)lenʂ.00‹=P--}7KǺ`4]W"zͬ *vf6We|*OdRIŜǾ0wVŝy]SX'"ڰN:kW{Y <)+j8Ox[ }r4?p#[;bk AqY2s [9݉)hH]UVdǧKW~FRNs3-C y'u{%wIOb -g?T.y6ԝ=ܞLY#ev-kv{4<|nUnOq<5aF# N焠2*&sVZ q+RuLD\A:va V%zsW;Lڽc y'37K~ ! rˆ߅XΎ43VJjv[ 0/? z_ȼ- VwF% Ή&g<ܪr0kN05mf6yOy?-Ý2)IDK6SYC0}󞛜6j>s1~;%Fr&Mybebs 4ղm¢`;xgwsg9 OR~,Z_+Cn&wbY]4X,pll*! 
|{~o<`jU n(fsf G>'ƃ#x{CEFrwoh"Eݕ-Qn<)z-TS_^ulI>P),R}0c,,3GlAGcTac Ø@lLY<@祴QM)R @I#qjkَS$o=uIe]M'a;>U3 |2Kg)̷'Dyttk{Íf9K—qnMMe>ڲ&tȻ}AI0!LGBQQQh.%b"l1`8G@J([Փ F1̈pQwZL*P|;`cpT[NC}@۪|^Z JP5rLCX|5-#^X"6^["x\l@%9 DԱqmjNO6:B/јkQNbFSZQV5r\Y-O-k~)3 kCC:Α%T3A)sYP%!jӭǤ[SP0tlcO^ nOqTHCUC&r;Tg|>UTTwЇYՃwllkip7z'"pHqFgsX8*DOFN7?N5wKU4=oŵ˰<Ń4UgNULFkSQ`L^9Jy+FGeýνfSY27\^.,NAؐMiQ2Rٌ2 Y/u|7_5~6yZ41?-kN4խ@.|B-aE`e9 gwDSd#ՃC9lw0Yʉ4颞A 6OUA;1Lv@IA E+O& tWz*fH:BrR/"ͼH3S-xۊ?qj8@٥T:gB:s*pBg~u0a>]>ƓÒ 3)l^F:D S`N  -&c`d` 0%oJF٥+א.53,ͣq6MʤUy8 ;`ZQJn}v"uZM$]rԨ[_0V./|Ӭɿڢt:bu8s͉!+ܯΪo'N'djZ!+֮ ҙ#!y8QnA0{kdDY#σ.gQJ) NO5ϓem<)FЧPTYa$fL H0 5$RypP&epz̈!G"AfЯv6n>rCZ%ҝ|3QEw~tpn%R5|0]5q7LKGü&;OwjNFcP͐& LҠ5[B6{7ӬU6:{ȺVk xQ 3 I s)~0tjF fi ٠ ]?eoT[YY_\w]>c`E=8$.Gwt@"k??iVެiX1nޤ]#״{_|$[OaqX}˻7*3Eܦs٥eiz E/>@ (FVQ^IBѼW*Dڿ^W5; JDk#}XPֲHo#EWAC&=;aෛǦ.@ fDBq kfJk43 #4,MbYe(hScPfkG~2xxpL<SB=WK[Q[F>"R) D*Dwx,%:I5rqsq:1VxbD̵)JS̍t%G&QJk)iD6q2maJT"ZFA@J!#V Ur:vb3:?a. ۛAPnY޼C|m 22@i5(U ؽ)\h)J[KzrF-. {Q|aN]0@N}yqkqǵ&BS152RuB,#'AiIZj1z\=zn ˣ^`Ү'āQ{JHB[f6gP9 X+mT%p{Fd )Eژ^OW#)mnyD-?-bd7oߍ|>p/UW8~r>tzr1e_^y7N.&׼k{9}":pԐ%lThC8"jk,yݦUŨ9>]O `t~ <ppzvFhV,&o rEnDHYfLɲ (RubudTVdW[O,x:6fq:>=[wqauΫ/SMtak=4kX6`7i7pBR&wB54Vq|-oF´U^aV?Ȫ;W,_?j|[i? uKTWP\4^..{59`g+`v k0_lR=cMԡz>NuK6f@ ٓM'| | (O(> n9|g<\ "5ޖsHs̠G,B>!=>.t:hMllO|`LezhmE~pN>-Ԥb2wy;NT隳BN+jGnC˸k3' (OE͆_x{Z8amow0FPe b> mk\Vh O9Craz#Y"1Lh&}${`Nɍ<4Y10$IO5ZCZ*~~{0,C|f޺RȮm7?\nnV6mw2}ٿG2IbxfJ+a,Ǩ4ŒV)$$iy/΢joՃ`}|sc A ZpDzB[lg!a^fMJ{ OUܓv-$l;-sBA%)9Yx \"X(Ѫ0f%@!X\-]6D&)8"5Ē^[A% E-{g3kSq}뻿0v\CW۫hG}7?̙ UzBml$ԭb{ĤUR9&,,;+)-M )1T 4P/jI1`oxK@<ҞV=g=l(Ȋ|ow)[ 0#+)*!Q:Jő+K HF0j^׵l-iB Th"(Br`՛8[~׭ԾA25c#G # na&"LaIH*\ˀ))<\T*Tɀ$[lL`J #<">| aC||:pmJ$YZs]<8n,sI"~=MMWA~\X}%-Pv. 
BE s$AˎX({{ޮ*${Ӷ糓={IJAf4秝U7z\:SHQ)YdY)IlqG)!\ z\l ▽A\oڷ#]OX=f,%#TFg!pጢij.b2ђM4D qi=qMv3F>CY%qwo4S);)=男 /S9rcvYz1rBjLaIuEPc T4hHKEP&5z;ĝVUfzyſ\'9DoފUnUb#"Y1tF)+"& b8&!\MŷnP!.:2t:{GW{œ4/ y$Gy2:y|Qsj4 U5뤃xU\|EVEu]rז;99x."C6_CV~6Ҋ}Zrr|J/{yr9RSB6(6J>Go+(O=t7>*%YSzF#ʧOC^X*lkAQ&E3]LKgiQY_ zFjd ¨L/}b{y)8ơG` ,PMӽDp|?~j.4W*`f=Ls ]"Pw~8ϩɋ7 ^eKcp`N`pzd@S7~&ԛ܅\ 尼\i|v9|NʼnQ FWf~qhs20Z\MɇˉZ~::O'6 F{nh(y>susWS.{jm}?9^z:=lNG-iˋ?wzŵ"q@pWU\RWUZzpN\ZMx8쪊 bJIlgWF `ઊ+١U̾UR!\YAZ\A`'⊃*{R•LCZjUWC*vWUJgz2pez ${t]M\-&WwrWN= s0pq-{\AZܾTZ pJ3qw>I '.ܢԶ |^afs9F~Lrg4''q*GG9~nږN*[TL4uOϥٕAZq<<t4>pڅ吏gjQVw}Q}Cz`؋{)aw{)oPtKܦM )A?ᐿ*Bz7)3$‘+W\=Z!˒ 6*g.9|ji=3}9:>>O $ڦB-:tցm)ŬmZт#<7:Myg`?wy>Ͼw]lqg+ej.VnUäo)QPu.nvtK]h4#U\ԺZ0x ;rP`+)o{aTnZaQ6~צqk瑇iz K"p#kx~&&ZϬek͓#&q,AKCUê¥IY/uw /.-`*1.,krkÝ ͥ%]U#cvXR¼Er.:80Ghf&2EϲLOf$MXzFE/uV.*(JqJA0#e*EM rt6`HeZuQӳq $s!òf|ƼsaK(,m5DU*J)K1x pLHN};pmW8&}"lu(S(# ᧟1pwx1m/Ԟ1# M ;d4'apetP#Q0`0w2&5n(T!J &Au}NL=D08U76>' 6UOVZ c9F ^,heHE2 #8CaJ+z'!%&2S q).D5X`*`WJ41P`:G 7Ft kj7׽l)'!tNDJG9|cEoNuA t} kߠ ^ޛ4: LL&>Q-.adE{0IoKW2*pBL+TF,q2E0 9[tZȆcOav?@F_7Rn6kR$/[U 40e{Y ኑ<4e,A:^a4vX5TtJ8ԅ':EdJ#z%<L:tTPMSpg MVׂhp_E3,w o ӢWcJVFԯY֔HZfpKTë/%08(c),ւ} ]WJ#䐆 XN'!@r 9Iw{~:}6\',\l]8AdƁ&Yv_kɰI$TƒQo%T:B҆zgCЋɬ؏H"2 c J ÅTR8q&;1޷ pppfSR6z61` :JP' @qVƤ;AVWUJ0&)о+S| V.y,Y_?HRaږ#Te읡#kDL-U*= VIDu]'[Ry$i*EINpA`U +䤡\*0]+dd4Q!JYJ$s%@X{`D7;^r;X1I YkAErumS赳ɖt5JȃDi)C—iz|fZa1C YJk مWBz ȁJI{P5r;XBI1%Xyܡ1mxH."GC>  ( lĶ@qR=F$i ԬN$22Ȗ hT eGcjkxzB=z;)xg[Nm+LL4V`Af@J %4%UU5QQf+ HŰkcd]#]l@^M' R$*56(l/!:UT 4~Fvf_0?蘦IK-o5h6UoP""$(YC@*JTvۇ[)%(iyUFv *S}<(Kcu| / El.AD'U(_KAI0+0. 92Ip :"̫@탂('k]%Km1J$êjiΎW`E 5͚:A2FZW%`oR%da eʊqSxmP BAh"J"`D2Q(Ȁ1! 
@wfS#Q\d-5SLk$jb * /BcA~ہ!7jVz}\nQy}ZmQ::OQ V @A"*GG h #h5YURJ"eEOn*dtE.!^QM.h B9~.@yuR/F\E"WS R*D7 $.r6)+Ud4>b9AiSm+0ײ&_k!Y^bMy'Jom,t\j@^[ vCL]֢!ncĉޠk*SFl&J&n)(gb!Ut䤊I&5[ MEyS|^+DSW`4 chZ`nB %Hah #YJ탞TB*$1-ԭmDHZG"F8GZ J7"*6AmPI-Ufe:_pӢ[5@i!ACU!rb554 B4($i7LLAXA^:H4St9K4iVڪVQ;KIF$PC*!a+#`p)j@ 9)Lq{`)=P~׋vBlWqwL ` .Ѫj' ⣺n==g/׽#?^oY>]>B"3?]A1/q=p׃`(l:?rmbN:{FNvS:a;uةNvS:a;uةNvS:a;uةNvS:a;uةNvS:a;uةNvS:a|::D`f\uՄ6;utK5e;uةNvS:a;uةNvS:a;uةNvS:a;uةNvS:a;uةNvS:a;uةNvSgwN'@99uɜ:#ƩC\6@ϬN=rx1:a;uةNvS:a;uةNvS:a;uةNvS:a;uةNvS:a;uةNvS:a;uةN:uhZ,+qHܐShS(md>:ua;uةNvS:a;uةNvS:a;uةNvS:a;uةNvS:a;uةNvS:a;uةNvȩw<ѭ8xTn{o~F3LOp΄"Ѓ7#ch% cѝ713+!^A4.Z (pLW{HWCߎe P?ӎՒ@)wC//Vߓ4]CW(ʺFo_ B.WW۔RHI_/rCa(/Ӯ[lՐTZtQo7`ȋЯnCYBo4=-ώޖұ]YUٖE]K3`?N{I*6ݥmKKm/o]}z?t/V[ @[kuAJ7WXK_') Nm \I$eQV .FS)rNKŇݿmlO@Es{*MumOW'O }yxtW>xXUÅ=8:̓+AKN<5*hod9ix&-\F g<'qݔeo>z'Lǯ;UeM`D/Y&<[)._U614GM=z#\M.~Xui JNUehU^ş/x_9CA[1͕*HIMZÿgדͽո1lظ5 xC tXPQoHf1Q⇟1!7nGߛazĠhY)l a{)8/#8' 4ˡVUtI򬉴eyJ9DZ!1]ߑCk1齘><tބ/Ҋn;e)_2|MyDj{_:ʼT:ؕwh:?òuygޙFSdhcv!ec}hxBy>n S~xK:uc^g bl,†T(~TBҪiZw/-xkˮb6ҩ0t)TRpl{Kb> Y[SߍRS+aX]I9@]!Zf~yZSVR7BlЇI}MMEo+kź\6~\"6 ͂jcQj^=`ӗ-V0tЭcLP ˚8b~::pLĄAHz7_ VG{Md2RJ +3tׄ'%" ($s;E'93u$մ o:u 2];gt|sÿm*k_6[mwśa|:jK A6--ujDK(gC5* 2+ZeCWw;Gơu_ )M+UT!#"!uEpʅ"חE|ꮛeGr)3+u#BW@;]JɳnHW^Uѕ7BX ]ѡZBWWW@ph(վsJXJ ]\#s+BgetLW{HW8grj`ٗKpM6+֛ t1"QBW@+;]ʙtʉR/c(!7 r}Q\ύ'x`ޭiiiNhܜ/bizu#\Vt?pGn_X.hR){(&ڌ uLNW@陮 UFtEwu$r+)]J^ KNJ";CkWLW{HWΩrJ†llJ1}.hpf'r2teDWl ]m"}9Mhe: ]9ǡ J0]}6t%69,䰄ΞsLW{HW:(#\FtEʆ5ѡ#QVWHWhwlFtEȆ7lABgB@(bC 3+V2";q$Z'NW=+?m.܌]m$ޭLWCW>jIs g: ]NW@i'HW!:sRW,?dTtEpI b J4N]Ȇn}#[OW@`lJ lzu(bS]] к:eR#J1]= νue."Q]%] BDvsØF",e8(8cy6GpC6*WRoşN*]2"ZBW #] ɆVrtE(aCRɐO!W+kQWֺtjÏh ]\#r+Bkg JjQuFsWW\٫+4ALWCWA cFt> nP$~[9+VVeCWeІ٫+ԂχׇBycwjȻqpݎ=B:q(VLWOmz2#`#?wJ+ҕ[]):eCW7fCW@;]JV"+`|6tEp˅=]虮6/1zuzZvFP+<T_^_/OWN.QW`p}ՏgR5, SSɢLQQUӴǃǿ]d8<^z_÷Yyb T__eg XYac!+ #/JBE*˶7sK54&cK)sol}Xd]mt~~,|֩Fo<b-<:T2Bd+?%B[1mKl,ph =xMwjޖ:q ;6k%Eyv\u8}AĿ:rfAw{ۃ$o;lg2bϗmEݐj-︿CI 6{)| vh>\<-2d1~{݆xSgx׫˓fߗg'~XUWۋ|p7kXyNV˄5@DM@v)FUDbrAg62urdFҌFl8@ڳc jl˕lK{G-!ll)?[3{sT訞n 
jk!Sʄ^8D(K!_VUP?G\"O,UB!~IDMvӘ,#9|;5tuQijȩөv3RSkt }!>πxL ˥<8 m?'ǹ@oHS8]dDŽgXy;6:OjK7~]Y+*,5>F d<ḮR#.vbo ρV؇B+]QJ_3I6$H%&̥LVӻCފǗRX(w:UFpˁ ܡv+Ýpy(Xx fn7mLSL[%S],}Ú[0i]h$93rtm/(N{@[v<6E[+7!QgE"y(֝o,lEz/7=QYX[r: ご^wV/_w+͎Lgw5]W#~^].pDŽh]0tk.Hn~(]}ǽl}wCg\/mɒ0YHٷXfGtF<sqV10%]VƦ3ɳ+csm;eQff},"\feV=}%KϪ6qV=D}W׬whYDp㯋6tpQ >tflzs?wUtUbpZ 9,jZjlSsc<͸5$U.[ij]Uyg3-Kx͇aSjhĴ@ӉLC E6[-aq.խ8Ы!jg:5Fm .ܸ Ԭ:]!J7tteڦaՉi+ke+DYCWkHW͖;r^?5\nօ]D)u+fی5+u6tp{V]+D)̆֐-Zyت]!\^yvUŠM0t1]!`kCWשMߕ-LMhj}Jfu+3ڵ cU%ӛu+G̨]!`n׆.}+D4Y[wi~;bW]kZ`|DP b5tUq-M۲rW_`n:N]D|;`UK;h bR6"`j"UO/|-;ҠSN!p`2jCWՆmVe_4t2 Ft..wBWvUPd6t>ttJ-Ft%̰(BFmFŪNW7:׭*Q0 "\Ǯ ]Zè|0(s1Zuhc׆.V] њv Q2ҕ[wjDWxk zmY`Q6ttPT`ev1KX;(r֬VDɬ36=obU`NW=2X m5W4JVټ]6=5+µi] ѮjP![;r/T\9V3|ƕ$\4'$B,rFBA9Fm?k8]?@ɫ6qsZ#eN] ^uF%]YY#B ]!\V5Dͪu+fpEm \YHzADIZGe:ͳևROeW%sZCYN vXm 6]`U+Di ]!] ӤVFcՆ5* "J ב6>]!`VBܮ ]!ZQyBmꛡ+{Ʀۺ)tc}WU*YqW{!hdJb+te7tU驥^#8iQtpet(:ׅaW"N${ƥ$`뛥$5:,P+شr*Ҳ&74VsPq UHu֪a?׫7`)zVZtk^wK^[译{iM`$Dj<ߌvnu j/[5\3}C#p!.RzyN;? J%l;Q lw%DaM \;9;؎Ow'"6x' Ws UOWÝ/wwɭQo&yP3g3}eawCu#ydדrܙơL?&[oQCr*Wa;]7ak WQwԓW7)d߽RDyg'("p Q6P!\&Oh7cB>.o#pD~LBw_/^x}ü| < . nJDx.}xjyK!FIw_i:mZFe~*ρ| FY;;H-ɿNS~9)r%Z_Փ&4|SR9|R>Yc˘д),^g2ux򓠽/$90d( r/)A(#`ۑfA$tcG;TAcak-Vĸ;DӰ\o!xht%LɌ Rm<4J#^.7p໱d6~q:'q:M{swܩ`-ϷZ쾁*|ϲڱyL1<bԱߺ!pVZ`=p#&R'0f~;U/ Nh%tAYr{*r_T&BU=Nr:k{0`e}5X*ޝgT$(.|hѨۥx1ʂF8uZ,OAS ((5KmmM{_i^v7}رr0"WWh5 ry!}FOMŧ C7tD>k!U {W7>fcZO,et۰:ٌn= W; -wUY9ҩ hcᨿzh8)^x i%,n7dNkT%8 2,j^#EKoǠkk6clP_^gn?'O8&ujkq@mq,gu!F܁oedv&a[m'5/'w_Uo-߷ @reg1O0ae4g3+Vh,z; {Aol(v89ɏf]JY?+/ҥ/7nડ4߷zrB.~RWFa~A:G53Хf<|ʹ Oy|#Y2~!y5I$K|VYH7q

9S/><a׃MQ/Ch,+~}v9(oe[$2TDQ:d0;1KMBwn?x~M{n﹏{3 X+TF?3u4r&7F;a uˢE枾hr=i<|>)PE" 4 ;l#8δN,Yǔ`)m=ȭCel utm'q&ν^{)ӽ Z#h<n歷w9t٦x5tgZ"hպۦ^wwwfPݣ畖d<=gg췵t o !uvܴ7Qu8j~-]˥xE}պ6hճKomn|9g|x!N7Ƿq:?%L䤄Ynp5vҶ{b)ԳW6zh__gWv6T9wlz<˟LHWXW-0+XVqʍpҊۨ1XML\X{%)-p[0PJKMVU@6>s޿͓xcp%w[#Zb^kVV[jt<[= 'VU.U2;$ f̗ ˺Jh\`UxfeY늩gI﫽JJw A*4m\?Bڶenn 6+=)f|B.c6n`FS~-3Uxq!cx}zŽa~ o[֙mu][[L^n{8L.;6awae:_&[`VWǞNt.vReQTXBsn5'B'F߀\|k5oӳgLOYvN4\]Ȼ]ȩ`nJ.Xd'(%Mp*K 57*"CCQ$N}:WW)H^o:OLuut] z 롢> {3hv)OVOǷ[c8LdbT q^Jk0XcXrFN0D`ԗoydW[/6F5@:)F`{Bk~_'[IxԌa[8k~~}+??oR!\{)R҃2ǚLⓆ~_6Z;~(]5&gmI4, ('V_:5 ?07i޵l.4<.desr2t{ԿڦN۹ OrZ-/[ G'-=ׇ?wδŋZ̋7gf7t `7ټ#]X>b@1o6't^О;40-.Q^Z s(Ӄm+l$0 A0 Q0fu~!!H8cQ)I@=/Zhb%pU28ȏ+ZD+ ˊ݄eSB^zC2$. D+(X8GoVz_IcJ%I)Sc b$96&*±0XΞ`Հe+c-$ [=ǹ$/$LpbBIT%(0#@ c)ŘDH ǂt>S0RFn+ )|aCl.>pu@ hR[L2Dil mn*f7sA|Yub]x:%Kԕ:ţ1=.8qfq ;04pܫE/8J$YlJO0f`ˮnv|#guoZ>%2ed hdQll px%yT>:UtMWH+ cDi7Z[bRZ,@"G¶ sZ{ΘY0;GQ IrMQE+H/r(0FzX余fUAh&0== Q ,$>9[o^@eN Ho 83wG:hM)J2 8$\551$o.DŽ+~Or V M gp+@ MОQkd1hƄ%.$Ƿ֨a -7__]\&bAk ]`f~0Xt fMFQcB{XЫxa今6"sP3/[-jr/-vҊJi鋖[,6XU'y ,z28(򔴍mb5]сm$5_S(ՄE0/5cыd,{ '|l7D: ~ީTue 1#T蕕X]%xM/T+ ;#jw*D‹TD_GVU"WNCf̌.B[-n CNo,ym?~w DI1Q`U8`FP\:UQ ۳y ^T^/fI 5FkPF'"zh,ɠVdd[%lْW3Z#_kF",dkNQL{J<[}'qLii" jZQ"@))m 3=j,o.S-2oJbQ#8RaeČ2X[:c+E]yfvE}ц?ܥ_ Vݦźns 蓱Rdbo- `qbHCtL˥(:z9XHjva{#kCVp2v{ݮݾ<\/dS   r)ux!ӝ(3I%97tRRZD2Ȕ$:l&BLjtj4w%Ōln 1L4H.j[(J9ƙa'%10B5&qaZl8yD㫡<~/i>,rx-<ﳉϯo=jkO$ih=8,XJI%X9gx59%WG`-X! 8m("YbVfTJ7*\ґǂ lB#IZ|yEIYQuCZ2c  OO\bD}! 
!L"`"2+(p+r ЂDY} }!Ig@:?E%ƚ,[cCaByE8\|;aFhtiH#v(~)z_+8A(V#1xē)QNF͵SƓ7Z%&o'\hGzm좩*sn5}qr˷M*t ey_i@Sn~ ]_|HI)S0htBe"Id K둳"o\PK84U9J_X'#QN&R`aCeQ`8"DZu"08FŖXt0++,qŽψr:i3\y/q}}ޭfGF;mi{hĺO>]7[n1S(">gZOK$C+x'H{AGE\JjWPh#G`)cmv+oQLL"PO7NjϮUU{S7jgNeN_]]Kz~b0O8p<^G*Do~غ Ә115&wuotguN{я'!OPYԼdzy'&¥7y6`I>OcӛI|p|_6Z;( ^q>3fdhY{0Q&^B807޵\hˁSj[Smm>iyPpn4RƉA Fwai~dž> h|n6_xQ|fv3M`O Zfbɵ;v3 p~qc|3Y5;lN-ǿz0);kE-埥J($ SMZdS$ @'8OjgZxH_iEW3+nVb٠!!2N }clm~Ou< 63.W}uW^o)dDLX(y(bD rz0V#IpJaO`m%L mAlD3C<:l?N&qK01t) sӞ6}#<`Apc=ac {eRH8~AC0oFIÃ}ЫrH@#&}t+d],#/k;f M˘N8 PMp_NOA u 31JD^Q qXhu\%< ,E,d>-a4.`PiHGŜ5\riSd`*b11gir}BEϬ /d1Ư UiK9m0NȎҲќ6zYY?n9V YU+-6YM(^=-ZF& i{zY[b X¤KJ:pm\e=/wU<@#/m_ Esve,ik4Diu;wkHߛ ϡ*;,GIǀ˦bT;-L ߲ —pM[u@&v+is}=\hN1fR6nDP`?S#b{ #tm9]Q:g_M[{՛w勍q`stp91gE36JT%;x *|+Oo@|ȣ7ؒqŐ(:QUهCUH0I1Jf<2w[4[*AگR֕ 1k7 sq)k0VҞ^Y:qL5%S:ʬhV3WAvV{w[_7m7@/ձԃ!=DSfU4/*6Aϲֵ')ϲcʽ}$vrťp1@ Fӿ)UI;]ʎM7Kj]' Kt`'3گ.(QJ܉/)D|^$NNnNfQlLVފ1c(5|i vٔp9̙H1`m2RVD/JNƸ4y6`%BMyK*hB8a4?kPEFYskwO~-zkxr,z381~8N.:KdQr 9*Ra) r,0qi{nXgUiۥU=4ٚ02G*[Ole: 9 .*ƍgTli^UWLjM#w͢Ϳ}k,$2&ɜIsKHmB,/uHhmQӧ91 %>F49 Er㖅2MA.WOv:Myx/ԇ3(1%s4"-,21 i4JNY R FFȣٹ/UOwJz-εf3q:l:1 z*1u%G&QJk)iDYal9`-y?fGz]P^yюeҲw1 _w 3i()@ЯCp`ya Bĝ{s?nC5a Ju띦51LW]kt Jo[r@A`5uGG0^;-E0A{dE A#K[㤚K+s%,G98NE@TpdSPrGV6J k=|3î,ز@}|KO8HS-S@1H-^ ʼÒs: 3ȢY1˞i˞=m{#tWP/5qisUbӫi/tY yr`-QH ̏TW';kɿ gSEnf\;Wq_X X8uyy{OR("꜅/Tk} Y! 2DH1 CG@ YXۉGKsFo/ 5g|>70U>gb^6W((eLq,x¸Op^W &jP _qq5>tr]%D- ]%;]%|%ҕDS@tK0tઅ+)Bb*bIW/$gXWEVy+@ђ^"]).f DWi),d0 &/ez*jIW/4W`,]`NU*e|*=+5aӫU$st5\C+t(5/RSЕZ} DаO{E P6{5Mo[g#4A*/ѫ F4jqn'BNBSn PW31n] ERjaJy*KэR4+YKn÷o_EwT~P}O|~`XUT9312Qū( 5[TvPv["w^?HMaTӇmQ c9QGCR+*CPBK9Z㯄Kh%w/Txiژ,xy*Ly҃Oc3l/>vx[Nןf.Y=<9Oo6~8xի%1r{Lkv7 gPi6jkees H>謯|luzPσI.l5Swӹ)n֎:uf{ok`w+(c7բsKmu,+e! 49ߙo~u6qwR0FВUhU`tejf:T%/Ehڵ\]ZۿWֿ]mƁT& '䖀UWZޛol 2Ui "bfb'b)3aMIˏvܹkTxOPjfỼ:fOF׷V/- x?*z׆OW\lE`f?vE#Rv[;)n*R?mnt?>{% o!ʖ+_m$"4,3*4HZ{폢[ߺ0.)K+>UFgW{ Z9ϻM? 
S }9*Y5F t2AɊMR V+qK IZkc)qgUm2TR7 QƪC"]0G{eL?OB|,L)Ecw4Lb|M~p )^3D*h1GU5 ZTr+zޫ!!%x}3A%DL8*L^F:D |F)K0RqlI!18H(n1!b0GTqv3~$N&E9X>P PV'/>hg-faӜYdHQZY0Q'10wF^ۀT,xETjОHa $(zrs mGXa'ҍ\1KxhT1)ȴZ''M0ƴ BGAi1>HAz5a9AAy$t$w 㠴c[$> (p Q(SK=tzE,SeҴjd'Xk=Z "Ah1AP Hw@0q9O:"" L w86`px`` !uPvͥ4xb:_B޽D)9˴HJ6j-PJ=QT1əh,p0քC%RF%>+b1POS-B`B<RGRÀYfQ.afin'KspZ vH@cc<'IELc`f`ĢDFkmȲ@UŝBDY$ zZiRCR #Pލ9qⰛUqnuRI n B9eOŭC`*`SJD8P$ )X& < k"[oL3!OcTf-Xfyiv q#]` l2Gc{tk" )LՌ}4Z/Z( .کYmɹJ p& >F;Q6KiLL;ek= dC"1%XRt;@FJ,^­w͍) \*/#o5v$ (۩QfW)ˈd Qmר`f::-xJ“Nk2JCL(:]QjW) 3&kjAUp_EA3,lkD[myCYS )|g76O[= VkeJu 6 5%FfpPK@W?KSp8H1`9Е*2^R(9a,$@r s^@k^qE)mKcAoolVFdp18+DM `Ȣŀ6 eZEꀏgu0f|gkEU"`^Cold&{B4 -AK+22{X^ؠHFnBV봝]Hf$B+C0d )ĕ^ ,Oml:c{6;   a漫 DLs4@g%DD@qVԬYS9LS֫+9@盤A"XQ1ΙAJ gKC)v )ÀS&,:ox(h JlY_VJyԔ Q"c{ xQqZTs$56j,z N kV8ڔ3Ybρ>ڦJJuVHȠi=Xx%Je5#BaBAA Q9.|g]m@Q;$KQ/0[ @QY\97z \  FiEM;& Ԕ!Dӊ:\'JD'2t]]Ew Lܶ0rEpZ`t-zzNe[C J-ZG'JA逄k#%Qtq$锡|&+$ z $bCNof}R`)x`Ml8kr)elQ P]^1!ɢs6icV~!!攌TH @s Kê: h.Э Ѵl4EePV/AJV ##rT5.rG`BH%T ҭ5h ^v|M¸Ƞ}r@(X'E)=+E@JNRahj:2>hnyaB΅I +و5AĜ!Yփerm#D) 8&J,oD#.I),>49!xM ݦ  q-tp:R_ZJ'18c.ZdPap "\=s{X.RH"}}HNY#Ԭ@SߴX)A ,*JnT` TPΈh4SiL,p*p% hC@dWՁHN+ dنW,'4 co"s2@B5q}/wuA^R)ԏ,j ҈Eۤ`5fd=`h=-@4% `"(6D)YS*QXHP`t%C}=9|3/6#?]azy=S"^xlIu,NxmIùlP=T6F zPrZCUVHEz(CP"=HEz(CP"=HEz(CP"=HEz(CP"=HEz(CP"=HEz(CP"=ԗwm }ufcP ?TҒKCIڔP"=HEz(CP"=HEz(CP"=HEz(CP"=HEz(CP"=HEz(CP"=HEz(C}z(i(=T;}lsPAVo ( =ԗRHEz(CP"=HEz(CP"=HEz(CP"=HEz(CP"=HEz(CP"=HEz(CPPx$=9z(ncPZH|z(P_䤇"=HEz(CP"=HEz(CP"=HEz(CP"=HEz(CP"=HEz(CP"=HEz(CC#0WIp[/z]/) U$2R9-zc[u[@yF$o* RVٔp=\URÕ5%+Ƅ WM W{(p+WlZl۸5S V1P:A_'\zyՃs+p ɯbZi$v |fcWQخ wb4.Y0}Ŵ>=y9IPˇ!-{ ~c+tW0]eu)ujsI?z|7Fd:_|fs| Eu}3 g+'񛕡'՛]\`kڀF/a=ӽ|xs}LH}~<(\)dǁU&ֶ}Lw>ZNW=|ۗ_^OUvyߏ3hWF9ZKR6wv<3{>֏{< Eysu՛7j vNwgON φNe\NFh8kgIbywd4j/rsޝ)zt`pu;zy ,6Fys|}~qnb:9{dž' ~[wG~q>x>Tq9[zzIp;;޴?\:Ww*^\zWWMo8q8K?>[yQ?^VkZ@)~2};v~Wg˼hl^O}?G_,+ ڀwt2߬M/6-`Ny92Jt~.qH=枋|:qDr//փZ7k"Q8UE-fk;'@kTr|˭<{ SWzLVK5-n돧ϗr<>G` Z&F׵v\ FcpX8[lmd?\:h[⿜̟ntqs{p|x%Ƿٞbdvo~M]8}yjZ^bg7ͽr$OOm@ls oYw,e8t?}>?nes[^n_U4Xw͡[Nθ 
;[J߉I2gﲍke+atEs_Ŕ`gl[UAojo-pɖV3ߖk͓WST5Ji1⢗'3_` r%Ҽ5@?mITSwNMy{͇: ^ڃcW =?è~/[l-ֻ;#| ϧRi7Jh>%G]he}Dʱ}9uXVѬ7|u%_![tE8Nwڣm_Ýٳ{ߟ~;<;:>|M?VVυi| R_*7C%dVb56ښdsRuUTGzngp?upT;;8=:ޣ-}woYeCڮzzTNtɶ Aj[ȱ<*%LI۪yb`SS\G!ӓH}vd)rc"srI1{M2b6hgTВ}4)0 -hj} "qJy@~݆^34"xͻ\iw+ŢӋ>V-LzI$95w7VC`2 )( @mIXgg5dWp"kV,#gA,l8!I u ˤ0i-SQTKbldfI JXjo+ab1뎧X*U`,W]qţn*`kUR0RI9#}&z V${6<MGAЪJ&wɉ^9 8ewuݠU n4^G8iNvuSDrTմ`z2LP0,Iu<0>tU.Ε7Q\% ӟN +"UP*{P4D66I+w;{"Gnk#>d&p=\PJ"Pu6Ń7oG1e`"&}ݎw\qNn&y+ P fuנWA 'FhOUPd/^ 3~UPMK(f qOEeL@:c*=} <|:zt4iҗu ns?NGas$/k4:Gꠟ:n7U3 si)莖 }@0y0 B_!F#]b{ѽQ)寸$9.P05J;'oOjNJ C{g#>.joj+t7qjXw7 /$WVT Y>'o%<[sxpZ_f|-_I"vyHoχ''A]FE ̊~J#IQQ$uQ+B3>wyGp={c˩ae!Q54a~k˥p *P<5ȵS9ҦYpE#К;䗯!#w6]vɷFkd+W=4 6W(2W\>,VWLr_.^$QqlgJՉ5aŽq׮#;0?s?|C(_^r\.#G o+N^:$'FP")|VzMvF쌾e73M} #JY5Gq'p?uK^ǽ`y\wjp/FEsO$ //lr!D~&!;/L`Iޞ1I?,1SdO O0 3 x)ƻZo{Nag1&v㼛_Jl;s:#&u{ OړhҨeS[}j L\`aK#  O~RX/ 'zRZR)M TJ&)=W(}Ӊ'm@A0M^ }L躑аx +bÞ{f#Jq-9l'; 7 s "|hv:zPj_gC!|isXa}cVk2X[3c}2y:t֦$-G' `dy9Y,' JC/N“И#] .BW<]!ʑŢT© &vg*兮-ӳNWR3 BblJj 3=7tp5-/thW=y+D9r^AWCWk+l15IFn|WVY'3=]!J8sJ E5sDWXUrCWe^BRҕai 9+l+yT jʒ$4rDW UY^ f_BY?4T$06ZḫFjAfQZdt[U˒Ƭ<֘B+DkY+@+jAWHWdHk4Q$NB9胸ĵu+CCE E- 0DL?& @]m}Y<ICjcYnWm^,4i {*wLߤVT0R|ߴښN#˙8WrдQ8#dULݦLm JԴMݲc2&2=Hnf$g]¬~b N#z "`Nx؝F:ȋ??$.(!4~ѿk7z 7B(_zuop'm^]q8.` Lk]>o`a$2|sŪSỊ5o"sXrɹG4{"SfZP.wUtϣpiMS鱢N] ~f+ʆx\񙢚ܴMSUMLQtW5h*I6g:/MPxڞۤ%I2&F#r`eV?V5C6:WKH~X;ˊY{~~|z_ed!Uӓ@F9?]O+;l&J{Mk9jrܽF}U -ظZ 4 (eo;8̮/'<.\t`6Zؿnï ͛о8a߈K7b&Azƹ6"Z'<+|gc$u2nB5͊j몵\O !k'k lʌ^=Yy)jZb˹M"%6ZN >6YLЊ-MV%X [ZnJ̼UT_HWLVt&刮ˢ+5Ԭ44ٔ4+Gt-5?U5ْ*j JWMsDWx ՗EWWsѪ7tte(22^Ue,t( J2C12f兮ycP\ҕXi戮,U֐vYtpBuBFdCW򜢗MS6~buiZ aWBh iCb( Е\UR*<U6wM]-]m#]iu `KޣշR>*&+6D@57 EͯWMTLdY)6X'* 4$IaMM.T%MtTT+du2$`Dۖ5_xK۷kcbLN#eE5e}cymz1xA!_pxhKx[C63`xoxjgϔ8eDCһ$KZI!9]&uȭ艉qa'I[N f9_]bgj+h7zA7ʧ҆ ^[;{D^?Ǻ(d_?W?C!/.ON߾遴x? 
.FEнK?G6HϟIi>*Dem^"䯿~nNV#cᲐ$ԣ1yh?I'øC<JJ ֶ FpF$H#y& )eȚ27HYQޠ 3=A4-|-{yW$)'ǚgZI{]7t'%qu6e.Se.U$C+L2oy ɼqhaf2%ym$`4JLDHfi=Sڮ'6q'X0oإM :V䋥Lք!8`Ra>۞`0+IY[;uyk>|E)̋vd}tDfe dl`JDFdo̻=U03_ \jKyH""}c79cE?T F9 _JD$ܐHqwq(mzd{^X'%!-%6ڍޤ{ܿ7xC)mE}fiԥ^ҢjiIK; ^GZ~k&MDCWøIIr5ʋ56RH]z#v3(D?8}p7Źg;ÏnE>MYDsќ9U8׉SI7~rgs!UW'^6Pz1кZbr<")$QUKVVd)7tpU-/thCYc9+!G h C:]!J(j JS YΧbPݺZA[xt:Xw1m"r$`Dw.})un\(smb lwP;wQ`-]Pxخ9KqDKD/ϡEҤ""'3Yf "/~ ~_7чx`2s?x6mjؗ+m,Q:r%7,7owuJu0fϕ<0LuX= \Ay1mkͿ?#?MR^W.+d3hBV?on>_#æ?Gw9Zbs2xU*R"8e~mPJV55zf<'G~o50*za ,tiXKeeH6r7ٻ=)-?ݔ_煑S0w ѕ=t\.1{(w3_q ėNiN\6t]'ގt<;]d醿Qp8-{WW߼Z9LBw^p|g;ӼFQ1U2O{e0PQn?]}lR8Hd\8`˷K)F+8{`ֵQu6;No~%9 \=LJ+k%|}"d9#hIEvC"{&^G%ΨZldcNrEafLda٪)gtQ,Lm2I䦃u+N , iw2]{OhKmhs4 S< w<&4dSviH{}tPLZ5QARi(K<\麨\-?xHXA|M tyxbkr!DX.oۗߨsA&CPвpfϲF( "63Z!HtLJ#C:2I>ȂZGxI^/T׶L(N+-yr}J3ND:cՅ{qxc^&@VڼM6y?qlpz/dTesA+0юw 7E$j( z Ds\v2?v+^ev3jOlX8KkP.LDB}',˙T+,YZ f% UB[ x0b 6,Ppjj;Ppu2I@爮,\FPV.WV$pe$gظ.\\eT[ի+YOy漢Y&W5<^MnhJ#څ+VWձUOeL+y0BF+Tkmq*9%.W[\|fɫ7s4̚J+7 CA-7Xn@x_nvuTVrVOݚ+mfRNd%hWɧTrVqC)LI@e^T~KնPY̵Z. s1Ic'qbmh|bDM#newwNUt2 VyǓ 68zrc@!nJ-n?;*1\+朋DhCL2+ė$?nk%)2EIh &AZZʶmO!gCz 3̡ W(IBRW+Q [  P-mJ*E W(8 \z bCeBQW+ < \`;tW U2q[ aj}>R Z @-P%뼫KĕьJpry03Qj4P%j!xH VxW(W]ZKێ+P)I]\U<ۦZ$jjYClg8^WձUOŕ8ӏ1M [:gv@ #ܮ(]),XW<<l];FH+r A, SjlS* t !p j @!Bm{,↳L$p++rCCnҘW+a!!!@< J Pj}0*pu:$JC W(]Znێ+TY8JY*DHcW(8`Ʈ@s.Wr)U@ \jCBjD\+ 3]ׅ++ Q&m4ZChHB:+kto_Dɪ}bm$4>\U5yWTyWDcXQ6c*Kj1J<[I1*Y6Jn1p+}9NCMV;R@P! 
Z @mjE5w@\ qX}-+UT+Pj;PpuJa6"`ZB\%W$.W*IBzq W(W Zeڎ+TigCS|{[OCOauoO X{_]ff:b~_F`w`Pי[>d9^55FЀu8,5ʶ3TZ":^ c ̐pe8.!ȥV+P~Tpu9LpB2(WZUJz' h< >ÖO㪒\J mSM%kd\WV=+,X0BMoyXMmS;VSixKĕ"`E1Cd/yfdanK@"UeMUNTFKCr <jSjiS*6is r  wj$W @njJ* bKĕj?* W Wp @mc`WRY&pu9hR0m8o@!WW+%OP `_Zێ+TiE ĕR萂A,օ+ Ab.*5Ʈ.WFL 3wrY0jGv@\YIš݊E8P *v\JM=R%^ęW㪒`RJrhx+JjMSzWRi 5WTcJhO2 \`IU*\Zhq* .Wxx PJR-[s(*D-Wt"jYxUg8g\Zs󇂍 CքZCe۝?TY9qp4 W X7~[5<\ڦmv: ,$\`e5 ԾBt@\I, WRiFy0BM_I-.mmYp"F+ī ' #*UZWRn%JkFZp[%L]?>\ZyRqed"ia(؆3vr9 f {WzАƮP W(W]Zێ+TimW+]u7ͯ$X&WfqUMmSKlےF]WձUO1a6kKNlPMlă*fJb+Sdya^NgR| y-΍ XіZ8(TƵY?8ceFK-!ۤ1qv?߿=Aɯ,i0q,qF)9)(Kc)S3* \Mi,_8yGF.X(ƟsӅ|in?ߡ]2cGf)֯#-z!UDgSLUfeK| p@kc\ *b4'nл<|vmz}d:7ڌf"MI>0-xL ,K2DRdjHXın$ N_OzK)xn_It/$vcvT7sX̥bbJ 8IDleS48D\9exS.b mXV|\ }9X#c~6?<>_&rA mu1AϹ_OoپWn>nVA(9/&vCj7U*6;n:u>+,AA n@S.|A>~S7ʣb$# gyh&Tӌ錤rA< s` d!cjpyp{$OH CV)+b*g3>$NLjgRf؄D)L{AZ[ qV]T[ܔ̇Bj!9b}\x6[֧)S}J7,*V`wViF;e 3Y6x/U4S"Β pOh Ә? 莽wI5n}`y6k 8f5}T,xTxw\_M^d^Kb"t#%-tf$D $1RD+uc-<.CZm1}&5koYj#WRV$⚀Df4s-KISf)HC y Q$6-?JƓOڌXFКA׏u:q^ǃ(%ZRS;:lmGV1qne}j3ΤT *4zߌqp9=7Ntӛj*6x[/_OŅ& ?7b:'3h>Ղ_CSNjW0tɤ|'/8y3ha)s,b 0B1+Y( @U ŨmGuv γPR+EKֈ+zBc 2} XZD&5,EN+qKSf>m Fu+(guaغܾR9F^`T%9גݧ6d.aSx4]A0:/P5Ks6Utلjeq3F8:5$fe޵q$/ ~? MrAkhSd;߯z(#QҐ"iN?ꯄBm$ .Nƫ;4ԤM„BWOX>XF{<.&%i!sIpW&nNI$`yZt(v.֟F̌U0VZjkw,!eV eKh*&z4#3QϝU!B03*P%!z~ÖLUBe rSZ+ )X FA)6 ֵt?7'!Ek`NjZl:(jdʬk㉏(|'!qD(e E!g* $*tN[kF^myvĎ 8$i2N%)pIB8j}LLăJؘ[sggv[=*w'RBx *qAui#[(6g1ۂ"W <ݥSiEhXF-YMSRq6 AI%tb+:񙜚om-*tTkkΧpl*L&˰Wɥ?m[( b< `Lf J)f5rDO,0C( &:4`ʉׯu hmӮDW0ҤLv@h5RJr1 tiV^%˴ OJ2XG6hQ[mAghBw<:gFOHd`T?mL c/ETڪ8 !| KS3:Vi'V?zmsᔷe!OE?t5*!r[: ̐K WۏZP6:-/wBED}&ӂ( eE Hd֊%j<_tD]h[5ۍK&qnV%l^{!M~y|[CVc:)\n"wp%Rg%*gcɑn#wTę>'ѳ I'(q}b}D;]TC`6.@'D4,r=N" F@LD١v#ϸ'ќyM@Kb]߯>tW"2föN[cj$f)u$6:_>opL;C Sbά%qAi@c 4OxgFqDބ!@ºTEO UBfg2tP%o`9jvuwyg[.J4X'VrBppsEv(vTy6Lengtf8 SsFgXzXboh9BoTEM9^y+^y'r>^byXo'gcxXs6?gxdG90 Qy@+^1PzFBg82` Fex0WzWH A#~0gN92eSFyi BHe [h]5fM>#Bmt@C耪CT$)"@J!ɊdKU)תۻ_Pmjo^rk׳_^f.>F^OqŚX D}ٷCRgz7:9-Ǣz~+a5u=L4 6z?#VtU hUCjֆ;b">=Cyxbmb'g}PC|Èa0!_OSGUzORH ? q! 
uwԿ;[p8\ Q9CP"Y?[ɽ#4{UѿrP*Je+ <2~D7gacxVeSR%sN]2PEA%$pc)h y}W.*х$Nɷ9v|E yfYV22Ugrv'h2  H*E:Pe#@DP!;땵\,փָ6g)1%맲uҵt-ZT6=+>fE`rh}IHqp'NԛH=54Bp"F#ќE}QfkV%tI!B$ƶB t@>! B*Bť9poЊgJsKn:e#p,bQ&ͼB®ys:d XJ(DHAZ3(i2@CrQ|Y#Gچ5G;I$MJ^Kb*YhMRQk#.M1Ji!Oӽ= x=U[](PTԑEO[(PS$Y/|&^'ޕ>ty<@-3.<3 |~jӆ.ƃZ6u=T5V-ʠ]vXTlRqhDTqkVUZZ]h'uWf 6 'EM oG'>| ,V;=;v?xA;h55}gw!BƝZd'k[*Rh_tTnx:Lːkmxyo{ީ (H^w^^]L^[wJ+:7&7MUQs3nt~ۭ 3e 1p@\p"D(8'Q1kR$%Lp;`Kn.kǶ~wGf8ncF`($s#>!>M}=d J*%ا 0 Y,^?_s"_OޙɇY7G?aG??@> :ދZ~mQτ)uI (K˵\.Wș[ï[j/^U&͋ojmz>H#Ď:\%*)K1iW@0h*^D-ׇWJkW8'\6a4}W5s'hxa X< ,"\z-LXa~8i9X/L \:2#`Fcɑ\0Q !ZhO@=Q /}NI@k“y D ṡ)ͱg nax{@A&.CED߁}Ĩ+-sCEƪ1ȁ3V{2s1[@"$mnv`!y֖e :91ND+"516O ţbA@r5=ÂH' !F6 c1!V!祴QM)R KEHE5,#ss07M@{/FÁ=R8k 2ga2k9kN@G*VBn.A"Ƌχbv^jfUT93~<]Ԣ*Ak+tAKU|e7?_7o%t檧.6/v#,et&ILOtlE[͵7_VcoXo튓D tz ! R((Ǝ(n41it8GG%⤂&Q1`G,z5i06@;T"c쌜]Wh;EIѣB"o3^PZSN,w_!r,A}/D{u-@+ n45Ć6/ȦP.-__$e"Yus26V|,F__RBm:yZg/\OCE}3ԒTMTa-^٨pPJ1]kN+z4\Mer)W{||/9th@Y2,JrU2WqV*{VFFuFZmffv f}.1qpˋmomVK~VVdܟ]~vCgLv)Ι(N; PKZʟuBq:>^Er_ф$3Az77䜲s*#g8H(n1!JsOa)/4..7+?"O9Ej$ˢT"WЩ^U1%^RCCϊ^Y~uYDcRG6b>n0Pzqh~l|mO4 A8^v`52,Ցb(xnayVŔj\1XqJBQe1E h.HApzP3Z'n ϶V|suM>>FOz@Oξ||8QJntu:Ʊ؟z040ͣr!yRJ΋5Om" rkɝ1N fW󞔣u:znt Za>yi@ .|!bK13P$ 6bXvq㞀gD w^+GJ{uU|vZLQv+|mĸ"Hm'̑)BΥL|:v.b] dY@Is嘃5,zi# 6Jym\4cDV ؂q1]}2Vex>x>yD!D;,DF (%^PB)gI+KZm")tYI"ծI5vuAͱlީṺE6S bX-;*T 'N EqBl-`Bh7Wm͎Mͦ+HL"ĥɥ0 lGb9SM 8Fe9Bv ;ѷ f[,qCFy G5.Q;Fa Gz$O@{[M@5kbF[n9$LyGH.B\p: [D}3mhKݱȂZB5B# #1sNxpɥuJLS&8bc ҾǢnApŻMޅDZjoҞ-?՟ f:,ml8*Kd*y(=mq4T@VCQEV;-)K (U`mdH޳0F_Ȅ!qw0?BI'ƅJ˓yo')LGZp使FRly߳ ꞤM˴ YP;`wsCB8Е;-cTaF_Z50%n1S]_ ^%;Y~ )LtxRlj1aw^taӅ_eg+=KgU ͻś~vz@ VT̉dEΫz1K"ŏiJ5+3nl֖ҧfH{36*@|(F%~QLҨ~͖tt[4km6lk\E_'34쟣KUbbNK5*{TVU6(י\;>~wߥޜN|uߞ~xVLC?yJ7 ܇=oEӪiMceߥ]e[ڽ.>-ħqX qv&UOAvn&Y:^vp6H YgNj@w0\]wmH_i; = Xf0fݻ_X|&Jّl籘%E-2%K6g0lǪWUͨkyTP4QﶁUx'/1ļ_RlU74#U(iXeC7&yj4?wC<3g"!Ā>O@sIWHY5c>L>2}yy0Xc7Xpo5@;UЄp=I5 (FˢwyB[wp] Zx>͟o.yFl.j7u;ˣȘ[$k&-ߨpFj(ΖM$ѸTO6=)%~[ɕ[UӼI7R. 
ط,\6 m2evtOGŌgsxGL \i/ nGmUV4K%,PL##(_?sX"n:ڹQ]v6ޗ~-D|TBJzDGIq8Q+Õ"EQ&{w=.b}~*P-k6l7G3d)~ z1^ hB4H"H98XT GV4rd5N,96lςe}B( *b b#%҄–k>zвAqVHUX#vHvB~+2d|'r)abZYYleaaR9 ( GI+OI;i ۃP`Wu@VtĹ>_Wbn sG}:29őuhBN%hn9SU py-=5QԆXzڹ6ƽJO݉;C?ȃ?AwQ,AG/+$UAQ)R"8ƸaHaC즸'n"SMsz%_O2R\yT#|U$͢.Am yU(+RR$~rN;uir>t#S_5(Ԍb][]M6 }{Y|aM]t=)U;e9µM<JcʐJu (#.jҒsku ʘvY^]uIFxQ䁕Q{JXsreXpH3j3 Hze6*+f$^c` ,Rژ^tOW#)mf|6KIXAR:xRK'):jHNLRhbt!ៀ@>fTD%t=C$bE7XK$)o~V)aXuxdx0TcI,[-rR%+vRQonZ{'gqhq0j?yG`Ku+&??4i-B.6o'}u&71.A Xҽo駷)f6\E8o_jRQN5u32,}Kbsl| ÙW ?{|p!1l0 d~mO?IMˋ6 UFwz^j>Sݝ -Ew>G-6/~}jɕ7m *-+h TŬz54?lωx]~p TGal"&<Ԏk)^x]Hs4Hss@ue Yf buh ]w;] ڡMk+Wf艩Oj s1YLZкyԺn[s<͆#~6e -]7?|xsaEs-ׯ]wϼ͚*łxvo0R~b"zw~4n{ [hS/9$uX~V{nk7\m3e~xNf7S0˵ԜhV3 Zk*,Pb7VM睦6ˢJwQ_\5\~ވOӇ@'#7[krWcm5M?zpzxrx.,<7h%>UA" ʱWA"uHR ,B|.uҩ'Ѐk0uunN~i*wQ{ouMO%Md#^aj#9,+rS8F2A4(Az77;rNY9cL o -&Ya Ȳg·<-NAs^Oݭ-h:<`Cog|[E.G.H ,Ysɲ.: W,XE(,kL4kyqcX^9nԽjn` 390.t.FG((wL(/H)f8!-ew_G+ iLV=>70e[g&Ip5v"zyHu[ LV-Πߞ.V2hi{0߷$]{U64IѾvվ8*Ƶب8P5-'L g O1`[lӓw?zA;j$\() h=۸ gab/'K׻() A5ke5<pR缷T^HO { -禺듉Sڲk#&WPQkssnɯ~h2j b8 .8\"ʨpZ5RZi)&8Ydi%NRvA:w+ /a@IG OU[Fb52H׌3ԛ(V#"Hj :gW["4vXS` AyiXI79KE##^ @1XˬU)u DD B2&uY͝F`!O0[IنYL5)vqc˻vQvũ]$?氺lh]a+N{EDdNQ0{/e)Yy+)5%(݁pU88Ȏ ZeD+2ˊ{AC;N(՘եpLx"ihVJ9L|F8_I2 'T`EcH `m8[UN?VAZ*TԬ5žGF""tb%DF eS@ )؆qKǫMHN-) 1Ĥb^&FkN,b!K43LM&w}q( !x- OdDE/YڂgSU$F)[P5[,LX3d,;D?R*If[mϧز󠛘d5`TBU' ƅw@-ԓ@%%HO@lvR ]`* c:E-3ĎKqVߎ7OXq]{- `aT,hDK<(1Q%9` e$LgU8Fd[|bΌ?4yoIsbr4&3=ʛh=1pWs gs6]PKtȭe6qV^ДF ZS=ZU ]A%q %K4z/U Ыo?/4?W4_Zh":.sS&Z_9_SE/1l4g6vQ(jjO| _"Ƞ < Չ8T3Chmsu`1*5J=X UEd9y))@%Ыlt]3N& p9Ld;l?N;L[\@4I(= sf*c)ꀰŠ"fqɄ5X%qLIb]Dzp6,[BtቪhyIC1R%rHC@=QO%(2YY!qPTYa$fL H0 53̃6)Z"JJ2fDhl٬  fl>$NEt5ia@@8eDXҵ)%c }UVߊ=Ckv/ytj/3tVa)bkkő &A%[5SV-nZ,ͥPHX$/-Ns u)w@"EŃH!)Ղ**ZyJSU'B Nd"F &fg0mnrk| rYn_mc)-wYߵ@2%`=ˉOna Ӽ竳OM^+?nb\kQ{K>L߷{hh{Fĺ F|n.0տek`аBY6*xtnJpvgg;v>X! 'e|UWm/;o/nޏ/[]537Ɨկo^W-&I> . 
Rj 3*-ٿJ)~Jpt ֈ+%c\W AUt/#P)9iq!< rt^2$g@z;psQO wOr`)a1w+@v7'3ͪv+-v]y1nV`/VpWTbV`Uw.##"r@r7{0-rx aU]ko\Ǒ+>U/I` /~Z)R!)N1F҈W\OAΜ]}T>hVyuft"?ϑ2ucKŤd<홌sd~#QiFmGrW!Cq4Wc-p3u۟o\b췣eML/݂y cy(3XB]&ѷ/SewYTTҐj,rc ©!Xj+R<<6΃O[^JV",ٰh Fw= R H" ꛓ&e9̔zȍUh9oԝQCFDɸ2hӧRbeYږ2'jiqi1Ugs%hEGX*fIs ɕ&c &UKkq >fv0 gܒ`v ~^n]6 n^@l@vHHQiјQf<С}d 1s* 4 ёn-[wEeIKl6%V- v8B(1Ƭ`x:ߣ#0c5e2#B)imsH TI]oH^c |sdnY!J)ŜEޓ"u FTGJۆ\vI qek\F(2c *7ATHc";ZV@0HUHBv;@P`9qGr7XsYlYl@bnN|lg#6o}l0xʕ898K v& Jz7,ʓ1P < qrD(o0>\SِbpX2;%$D âLSKXs!QU*k<. "m\dr&wlq~P 3X,A}ȩ2q\kq@թsN.թNuSG:Q:uԩNuSG:Q:uԩNuSG:Q:uԩNuSG:Q:uԩNuSG:Q:uԩNuSG:9\)59u\6nEN'i5NpN'V:0:uԩNuSG:Q:uԩNuSG:Q:uԩNuSG:Q:uԩNuSG:Q:uԩNuSG:Q:uԩNuS瀝:dYS}\Sp٭Ʃ#;u2uSZuSG:Q:uԩNuSG:Q:uԩNuSG:Q:uԩNuSG:Q:uԩNuSG:Q:uԩNuSG:Q:uש&+ShS(U1:u8ԩNuSG:Q:uԩNuSG:Q:uԩNuSG:Q:uԩNuSG:Q:uԩNuSG:Q:uԩNuSG:9ΛKc6]O~{&KA7 | R_^WgW1iDnU""cQJqE"n<|c@@Ga,EgQqEtٻЕ%LNW2կv}t`tO.e80ut]o5ʫSwo䲜z6_K}@WgK4uߟ!P<}=r[, IX{>|j3ӧ:{Ӟ :tܞulxNY2{ޜyľ~ܽ^w R덽s_|քC/\p%,poXS4mjiCɔM6n7T='gг1"z9\LXBiZfٴb+(bb$ ZDBAIAEB iEt3a-EW7ЕMJPfV:BoQ`jJ[ ] EpX] BW6+HW_]0]ܕ ASP޻wU)PX] KFCk+A^]mQf#L5+Yr1VS ZJNW2UJ5-d`QW۵ЕWW2T1UNL2JAAQBWHl4rG2`6t'\=z(t'J:,btu^z ѽC866\k,wg?}y[iūSݩW8M/ mggp J>_hWݙJf ~Ι7."w)lRI!gC:{=B46=ʹ͠9S`)?6 ׯF Z>x'(J#dۇzcWCWЕ=ZUP1ҕ>Z"VCWy-t%h9t(]!]Qf,L^ ] \ :] {V:]Z]lcWCWZJrpʫ+}>%)]%]ƭeKw׿<16ԠlܲWw/d.RkEj0u8JvEՈBțN6(U(EWDW8nJk+MJPڨtutE?bѕNa5t%pj܀_!6+]#]`ť?|g7y3F{yQe>Q>Qgj}iKq›6B)5 ԀL4*1qAdSA&!$xลmݹnE\o?hReB.}һn-tg&pB+FWbw}Zy_Zu)Ѻ *YwG8vՍ ] pioe7s5k(^vyW?ܧE[Џ8w!WˢϽlqik5뫫k9oM~X΍5|_ `sebw֯h6~_)Pe( {bzG$ZrLykSJD3ԧaAW%T. gu['f.?u3z4}^k!6[j&0νޚ')ӽuu5o}{H6B3Ѭ;ĖmmU7I;-z^kyX&сwǀVF[څVQ9gv~ѧoOlO} ܵT/ TmkO6) ̦k} Y\k [T黇2C9+Ao?;~`vĩ~咮7t5ғs%|Rq6/"]Xت? 
f* ~.vf6UE/ \UĻP!,}MCM }i՗|HEd1f%QK\ԡq8DTڕn nO%4/]Ϡ60gvCl0Z^#{{sMOA)LA/`ְn.nP2N6@tXHB:`դb=]td)K&,D vұ粯fsh__xV]ݰip:-8)vyl:'3.1ax kp~yy&*l.N.z'Kmٝ(ϩ5Kk߹p3@Õ1Ll4ںlaybVөVCeȄPCw];(Ӑs?^ K"6]|2sYm^+w?iĵʓկi2zy1)+R'2J2%5!R\'#T0Gikω %)-bK;v+h}Vެ y5>FvZwq"y6NPɛxE` +NK&Qv.A+ˇ;A"D!ID s2 Y )XJ&HJ[t7()-> N;E}A"WN& Ni ^.PL2r< K\>XKt™CcmzUG\G 8xt` tEw_e\bq|3_vts&٨.ޏhu^:A bHBDȀkΈu5x-4xE)  ãl =VJ@\I>Hx؎a `L[KY-EiL2#I@G4.:(K_]1=#hacI~I\p)bnrn˓7|.{Mys"1 ޝ,xNb̖;B/TgNN_?H"GL4B4eӊWHbG ݑ͟O'oV;W*K<[ؙ4 2aAF8s6)IHk\:c2ɻ *DEۯN AAy5>I {3lonڋPO38g0nO^v86C*ܾhOE&?cGU)$h8"؆'(hnڧ巂N)n<5ADAT9q *)Cĺ'qt@^J7Mr ҕ9K }ϭ6k3VejsD&vhO5mP߉\Zu*vI޸:_>Dnma04T0݋<^ymfW?R&HP^W'huZ(RUtU/éB k".lwK$hNTL @Wʌ]*Yja76uaYOoWH_rmтO!{8'HW{+SD8*TӍ{CrFx$׎VFX1A$]Mb򝴦zk=KWӍ$t#J-2`iQ@V[b\Tȼu|,6et9U<9^tQF.v7()8HԸtW9z~ !.Qrh$J !s=8Ҹy>lj` hL"XE0l-mow+:dk|;&Zv~n(KYq}jB;S&zl%DuFᄒ:Zug? |ݓŨ"8I eL$ JiFLtFNi`F:+q 1G5Ƣ8i+3 DL0 r`5P}2186e@1?_L1Ο mQTJ/|*bqKe"$atJ$HF,**Duz$pyX):pkv-&eE4SXg8@Bâf1䃦a!0rYR>]jEj)Fg^O tl4lH1RxƇ U!0HTi -8푋qO@qGV ۪I2D,NK)ي*!;HeQ o T:Y' xV+ ]a0`[znoŹ\Bq:Rjm؆~p*Nƣ5̪tsD+фkk*|e\[^Yøh9;i_su$#AVmqo5{h}n"DfdDhc̮v^D')(r`]{O#Dz*#9 ~rv&Q6ёN ^̘vst 00ft~] RaHxot' aiQwN=DJ%5\Flb{(LiA,K J{H@H3_`w/&z/Bssr Jj L VGNO:Ӎ.L7ozRdLY=Z!P/?rWB/vĊb>1Pz~Tzbt'wX(\Ղȩv$#$rrQs-AVcgƫ뤻Vc=hL{ķ⻺zkGM ? u&:Wu"L/2zcLgL4:lszPk ~~4y9B1cjfv;u/p僝|:KipIA,_F89LJK?ZF9py]!q l]Q&r+I<JTmOvo F @(>DPy4Jƀpggk}ŰJw@.H,nrA> {l 5S3_;wh&ha^V7K%$RRJ *dE Q$rqi=r6Y-q j 'U/λtӥ.Ʌ5(2M,bJV! c[lE7b溾^#WsfDZ// mq 7#1Ƅ#rxTk sdˈr*jc?i#ץĎG^Yĭ$H{AG*K4؃ (,}%oYr[10}ig)-}ϔb:-nP~+g zxmةAl[-U{7S]\=ז2r6LennfzS3U2V # SMZdS$Qp@l_`BUYɏf SYc^P Í',r eH_0N7X&]ҼLpa- yuE) [ATQ{hYRRI6}t˟" 5V"*ԡWJ&'>'LBDZ|2Y"fLy )J:82^M}fW M7O!@!,TsƜA sF؈bDL$B^<E13U܂,rK!F|d@(O#BPB V0l^ DpMeQ>[B0f7+Mno-QcqmCLl8-梫zH# cFz~0QiwdvYYE1R04tV}=i1c&Ͷ?eZ}'.]N^PȅPyZu1,lId?ϻȳ`$7$%2QyxHjF6PآE8~-Nm%f1Y}|6 KH]C;w1EfJ4t\bjƅX~8CX!VPlz4nzXC L.9_eerd6a?M+WwG\ 0<%A}K'Mf4klfy| U F0bJrXB6'M3UonuɦV*$9䆅a4Q*!%Ũ> "rK%? 
n(BkNH`"8f:KБ~U<g% "-e }nE o[;*dv8$19֐IJz#q"LIQ3 tVɐ,~VUO; ƨm9!;UNnLlH z"k!èԝ )ؐ**1|:D"2ϙ?:ubgoOXs98|1nO&D$ ,fUqJxS_US C}JqbU0;bI2eAa7:Y⦞+^xUJ}#AèW g\3EOP]R]xjJ,CKUC)㄂B5eUc0Ujʺ2 d` oru`̠̚yT2[?9!*eW =BO(uϭn)jG틃c01]Ex7$PLJ S:F0AQP(}#XXZD,zq742(s< z I>!J!@κN&a@9,TbPc;ZG (}P$AcyԢq-sYJÅQdZ1.X"Xc\W&Ċw]XuR B3Y?޾~wٹ{OI/sjN5º_>_'*U1I 3 JSID,y`H01LcnKaS=[ [ b@eWQ 7H3'O$uHciT1PԀ٤[1:#qԬVӼau5}/nw/ J|޹!F}jvfn-/yxrjxɻ'ĻDnvCz5ΤޤbuyqA,/nӛoN/'fn'^L[:;g`8|I׫V.ăL(3os"31O3uۊ|jPI)+AXO]=:<vP D/ic0ܞ:uHw@C+p=+#DH)3X3*&qbM@$i " B &IV&YJ; mwgS"s%nV5%ETXft4LҘX8v냐mDAHЍwњ'7,[Fۭ`ڱߕ*x@Z'ljV1 qV{ʼn ѱ4k@;Jڈ? aHjbc3CS;o uиd#eI܊@Y@@U7xR{~LJlS߀nuyXǍ1@H9h&Y"v/᷑D0 [6&>z;4R\>ܧ·!9FYn12Ko&jm 3+_ƺw)g?Gۏs=E'ѷx`[hT"b9nu$! EeTym!ץkr 'Tܚܰ$݇t!c R3V`օ(WY$n硥*t0֠QM^=YUb|F& 4krѷMP A|\Η8C :֪jej4 ^s CX/2a@&^_^*G;Ԡ1K l /9%CygC?nW0UBf9=7r;sM#7Y-J =Io=F#uz!༝M/0s,yx3~QuA_Ϊ)XtuGPe h] ic~}`;|:A4UwKMk{}}گ4\:`ؗϬ!lЦzbzF݃ 0(;i. 7/]!wdz+psZN mi {izB#-8< S~}}DQhDw{sw_DYk(cW^0ষTfKZ\N;,MZQWzUwُwn]rm:Qntx? &~ϗ_aJ0ڂ=j١EvuiL "C$k#Eq׀ "%\4w8BCYvj&C!YmĒt~<|vS J\;YXEd6,س hڦ>R{vA8Ou[K{Eg[=ڴԣJ= 1 h lJ ky%z*޿ *};g#=+^ un(@gV ^})Ex6(o gIIzf=>08;HžNS{ƽ (oVz7,˭!4# /[󫓶AApa3OPV_zm[[{iY yZ:ko8C#n eLP뵹KXȀnXmXBB?<:((}y-]7d#L DJEoi7&<ݡ7fcJOɀcS:9&xFAT̜&0ХJi8>0ep̹^F 6^+~̦ _tXiugDzxrl7KpzU{~Xw-zc w "<'h` ,c,HI0]#$^!2 d: j hiPtkc$8 օ'p"ֽ n\1+j1/j1G r.}5 i.h3n[.E?1ׯ~X~gk7h\}Xbn.@$*c,IAIl لcc %0%b,}97 `P"ր.2_1BE'ѷx`m`}qg XM쉼_j_=IBg|첧Wn"Ί0nL7 56T'H҄$XfT8c7 YHkM=^iGy~{^n/`c P Ab8I"%#Qv6brG74a-BLBcw4"́bّ-H`#%q<1A`O 3O fw꓍l.^\,ԙYoIxJZP%:5+S#7dMVd98 Y]>Ԝ$jvi].˅LS\M;OYӉ+~[3>ښNeL~^BmX<#2{U+t8Z(l?73h2^9(\ųyF25E|>Esu⶷kYRq}iZH\D]1mׯZ_!t-5BrGv{O h2R8zY.v>[b(‡Et7!rsl3sL(Mxh' 2es(RԗPDv%{/+#Pf;ݭL5TqJxl.i JbRф 9\ж=ˋ Be8iY1DJ$n'JTѱHaܨ9}os mm\XN_:ؒ}ޛ|vZ.kvࠨ\/J r>c Z }nf G<]ӛ2OfH.䴹L[EɒEd\.5[M ՊDH2k, O"6' , t18" kBZ% ,-bPZ H!Ey(Q'UQ#hf۩m> B*MPVhpM,J%qبP!Byk=J)T<"K_^l3 =cKNn qVaQW[h}fěP-<4 |3j!T)B,2fHKePXXE*C"&)Z\diQIjuɊa = B=lj- O?%xc䖢J?M5K@F֢&a)׍V \2h_=X'"6eqZ/0vV& ~^O3 |0Q#7.Px fɂYp;4tʌʌZKyDkPalc&Y,i#9bFvPIv^ȩDa$-VG?&:d&&&2ݭzquh]l8tC, 1yIAٛ+xkgUCM=\-KaޯV)v 
;}ݸ*X?f=+"~mDc%7p}lq,|D#j0ΉR \߸M\AJaY*٩υiE:]yߺYyH壑FjJqίar9:+$r50 ]1m#%F|>]h~]8 ƨ>MLdg[Ivpx`<TaD=եۑ0CۙtLpj ZW*E!qޱv;2{e e@$9̛? ygX=qC${خwsmbF'5ʽOˉEҽvzq4o=,wSB뱡{M>Xr vW{ BDǯc71~=\O TC~Aͺ[ ᶡˊ gX(,= ˍ<n~͡tQǛv;?km n@4j~:ֳpn/ָɷ`<|H?Uˠ9 <ϛT'SrEeţ*@Mrd7킢]qͿއwL689ZD^4<xG5ޕ%z[d*YwѷLDB #<28ԱiIb(g61(edچV+&\F􄏉]:}Ecū':7CP|h4@3LP>[n:(ÿ^B/n1|soԂӘ?e e~):0eY?0G9bc$cL^4 L N2IMi?N|*z~4|=O/_=϶Ie]"*J|z7נri*t7o~}sb1|cňiX Fg_ҡoP_[,yzXc~ ثqh4tLv=2$f!L<'A4ͭMJkhcF94"kKmg`4pt_L?Һ ɩNɎbg3AEq -"%_J.,SMϴ, ӨWnzJF٩KT@al,R9{?f4cTw)҃t0l2 DOP9X';L2 3=|7^[2m0oi^t,ޚgY0ʖ l dcq$ZY?c yh㡍6xh㡍6ڴmZy[i|pfe>M+c#<6c# 6R.!#Db"Q+ez֫;ꪓN֫* ;h'5_fj+:|'6`p0ywf<.CR977 nexlʴ[Hti 9TtZL}z1eA ݧO<:?,~T־j ]}aaڐݳAĢ:P,ʭeqYaRYX&uHY,ZSY.i7OT6ۢ*\zN),NUZH kQDH\L8N?9|N1a.̧?Zw-ZWrBi8ҽ_,L/wЧD8 0̵zp/: Qۻ`CLGwѧ𓍳NQds0`noOZ;[elv~+'}X<2q`q C3_uyo?n9V=_}WS}r1G_"`:Qf'uw-yB񙐔sMv6TstqLG0)1Nv|}>В>{c'㩛ˠG^{sWk>X?%ƛL0Dgf}!_28~ 5NfAdd2[pvv_` T AZblL !FH"e"IȰ 0QdL+(,fIiD>/%@ge_9.9WATc-r:,#bB9ֆ2cF%Vsfnp1S-vpX:oA1<O eX|_5܍@]` ,|EŃ_܊~!L|Mb$ JT8MD'  &K Eu^l`dM1Sv2XjPp !.FN: Yb`Z!#{:AX , R:Fd8bTSXa RAJ`T8PWR  =y00p> aAW]bPDSδW'DmI4fwВ<6h;=gI,XsKtjL0X1cRPsOG*  69~1&{n61&ӱ;c*mic_J{Պ4cfcy,2؜71jl.{¬D] +<{g;1&dKNz;ӬͷeY,lEZ΂5.7}N(eϷPY ?|/3M%1;&[%ѥIJՒh+׎vk\6zBVdX_X7Aܑ-xtՐ9s2(&TE&(Xi%+1*ͷ]~ Ci6a!Qq80n8D*(2JY$ǨHʵUu;fS> 6ϳe;/}3[zf;MkB5!RA`1;ڂh!rHu!(A=~&4h $pH9Xꫣ*ݩ}vS]O-6Sq8fB{7)2u zI*ݢfoi@ί=VWQ wqU3Rm.ZZD驉4N.]`Dfζ# n{E}0FhgaUaxy\@[ 5 ]#ZWd>ゖ3(& >Ũ-dQ1#X&XX)E 1m!LqXˍ$wuQrc'8u}G#20h,@j ccB %51wZE ŌAmKmuB3:tk>CmS4Q;_jAGvʭCȟz=%Uk{WMn1wP{)Aw[7Ź W !>?5 B)U,KM,|="JIh_ ᬔ:Yi>i--I_o:ζcmhL1eha9#nUfk|mUo/~?^4x|],/]l`"};F5 tDVdqmGRZG+]*Ex z-@~wݹBwtwN`Jwj5z8fLP_ۿ0I(0BiP!V&hB&t+:5NH1Rk\ &ԝ( z Nq7n'x:~v(/kpyl6w׼D_ ZDE^5mgv G4GQV_꣗q2mjq /SU8P錙}zNU(=1)q ;N%9f8= D]ahvּ&ܹfw@\V%^;?Ϻ0o\ Y t Ʌž97&n$gm~^l>۶϶ml>۶϶mW>۶϶m'?B`rZN5[˹,xn܄u"3gfLzF0SVI!AqZB(}HVT6L(TPԕ8cWܳ9b9GQv쯝5לMe:j\FQ)Jw]9{s;ʽ|)=Mԍ ϝE}>,|1" JYbuB %b$Yc%6hDǡNb% 66 N#JzfǸ=E8YAcl>bJO  0Bk&@2rcZ9Xoxh㡍6xh㡍6-At(6CwQ'>-c#<6c# 6¥&.nj#Db"0`L|Elzo &uI'Ugom/3u\]Wn{ڃvݙ Kt_3`ᡫ)+ne }'5P}vh1}FIbƔ)lto<>!$1SYBȫ&tmiC:8tϺOr@( et,*KdQ` 
F.&W!eNKhh/KMAfjeR>RlrsgY:O[.+iEA|ҬnZ+ȸCuCuD#5"rSJXJ n9E( Mqj$o!>0A,i'="wm$W_!e74U670 d!}TKx ɚ`jbjݬ>(gXn6߯UhHp'?4\*\?qa޿%:Jo')Ɩ-wm*eQH}~cٖ3hԝĝ{ ݺkǿV{ BЌ87p?h9C?"5ǔZ Oc48A8+ZifxF/hbJkaI4RK)(DVwxߐoYhc{b9c~Hn?r]p;3hYbx+[J\ORowE5HQ8" 2A':&.O] OAkH4&*)1hdUDhDC`HR]|6ɝL}+j>`L'Gh)Iz~zA5 4QbjNtrD36I eh64NƏ e4$QdB4sm5AU§,NtˢcvJy,kpOZݿvȜ 6\^Z.o#.TD/6k#댑|vF)FPukg+[^osK(O=JN HdH<~ ^8R"ıCEIHEpIcubfK+a<&xOXY+"0n?4IHeѕqJE,4! ,&8a7$r-8^ç+w4'Bk(*P:1SD)OjBB|4v`22:4k ƦFT*44&Dֆots u3.p@@G%D8>,"Ee⨅i_NRHR> MA(D͆ƁVX+ $N]/C55|uY{;|$e\Z|~6oƓV J G+uDeIPO5aWY%<$e9ҿpk~Z|ySKRѯF*(cF23Zul,cby%NO:;>sJg"~<[ D0G\K5PLYTS(VyVќy(V5W euaͺ-s~əjXn͐^z+YH<:6~v2dvCܶel4 b[D\dlx%`؎Hd(bd~(мovvEr]+HPjX\W{3(N^mF|>n/* W6AbdZi/A8N(|c/."^KMQSmj iހv&`qZ(:iW]lM9!2l/ņ^26l2Zpxi20DL_rsLox׌ᕌM*j0+ BOC*)3X,܂d< m3A3;M֊ҜL(jp{w1H^V6TCh0w^I^)ЅApFlaIbwy0^{8Fv6NCS!|A ͻ{%H٣6Dm(o×G('-xWՃ' (*yQ8 ct'.ǃ6̘=@s'I:3Agy \f\y8\Ͻ.HF2X>$3|+lj? D)x!Ҍ Np~_h;߫k)BH Rq8ȓ' vybHY ^N#y"NhzV)UZWĥ[>/l3܅Bk~y18y 2n K׋(}?̗(sݹ'Ɯ+`ZWAShj&d*%#`4IjkrKd\zR0ܮ);- گjBs Ɛ-wF0L+u'/{sBDgKPM mG fz(WS. <*@ ŝ M;/Ē-ׄ(qVC2Abr`ɗr/JUQd1NB0i)뛉>x Պkmz^8%M|QJ iw QyK ږ [Q2X+S8o0q9WЌ?lۯ# V!ԡ:$T=BM~aRvN2/ @Dʿ,>ڨ8:Q tӛ$ ^ۍ7# Fqʽ}\J%:munK9ԋ<ŕ6ҹi Є2 6)2r8JN[P<#0]l4:Jd(.w/_VC/>ъVcWBI7ۖgVRӇʋ81~|e^@k9ZW|/q(DZD1-2*QS ģ-I[B/,T}6e%8Z8|6, ~fB (h\UY{ YhR^P Κw yu쁠.Kŗ&r]'%[g'H g OKd}; cry5cFn:e 0ryp< L%lk9ñ&[bq1KE p6K.kX_:;<.oc/@YɖJ=D+֍&XDQx e2VVŤj?׎EA/8IvLoٹ_?^RPjЅI@U> _>|H)eR78czo;^;Z]DŽ򜫷;O'j˃rN$julc'].S9w),㺜~va?}^Bgd3Aeߜ{{r-3w?aj2=?޿9# dNRwb"?eaMmZ8̫S|s0RI*E>sms^$Gn\iZ\l^y thO! 
7_ƛ-Phhz'kA v|(A:G\DӱDVۭ_*V6Q4Dr|nUW=o5^vQC>ܕQf/~>5w.zkф;Z]woavհd[XvjƉ4NcT%B0k!8H>|j'ŀl3?QISn:9YW|ݳ f\u^#w7"!Mb ڵNI*;D&eRE:ޘ[݋WNm1SSuLmJD_jF^ 4m5RŒ}MMHv'S/rs)>I ء=n[lVQ3z-l69I>ӠEW.߸3v>Rxm6`=(0W8 D3Dh&4y.:tkEZް?Yjyۇ6 7F'CaA[4 Xe7>U69̘*Y&HRwQkC-:}9=.*&g;CRjqD靰l(KS"N(%wGC֡tӕFsQ"%cAP#8 P> i%S[N9-l_^|;oUv\ O4*JٽZGӼDgXx>Lu=xM}0y˳A+1tTG [F\bZIlӵeN̚ Zc"3JFԥK&JZ%[dRx₢S l`:*72@`SVDk?$ScY$XP'LE rX%#y,IXRű2c  Ay,XGۜ$ E6VCzIf&+v|?pE)Lq[[5S1OŁ^&"9:Gqf UZY*T&;-6.1<%DƏӶh*baNi"M4(U`RI,Nu61X8!.!,M4uyFvyW<}:x/; MQosǓ5"v+nAFؿ{Y̌xCPjϚQ/v+lN{͒Ѵu$3)}4?~=„bZi_VJ7uVOa4`X̭ms7)8\iQeStXT n eG5 bpby?[3K^Yw*x8Ih"|.4{Up!i-οEBy U;YV-E:(t4*n=yH9L` `#^O߹f\^PX`i^OB%>hם;d#~dqV9vFpšjmؔBDӆy÷*ww7cN,lf8뻛#Q3 ?H*Gh)2"ԢmѬl VyFи: @۰~ &ClpJm|J58!BafUN+T0Y9,7n=|oQW\+.rƾᔆ%#z4BNrrL./c%i#RCRQÖE%*e&´eE-i"Z<.0?_N@<Y&Ǡcꤗ{qd{q:* XvJ(NUDGj #>ƂV'bJJa.J0$$ +'ϔے;R % V*2:'%+cC.T?%;R,~4DjAҥ,vr|=od18r{m^ <";D(]~~E.>?)0??t0 ""DKP X`/&,G_o;3ȶ~?Mcc>^ |DS1\K}oPT~撍)VW0~J}bZP$Pq]nbxZvd )ԠHH%Zg/S2>1{~ Q"ZAz󞯘 au%Jr Z\3}0Jvt`~x3\IJ4cp3`Ӳ_R*qSJIR"NeK#)!ɬ䱦F"`4mQ_=5|P^i@{f=f vPI6)~Šʜn<Mf}\h@ 2ko%.֞a mFFD*E":qLWCCElQ)H֙)E)BBE45$hob3/!+Ǿ?1/kqbQ?;ʛk;-V(A]c0R} {Eev3#*)WW:|q-e&/l삤Bs慧Pt@~y,c3iy~ 差_)^1}LNw &ݻ,<k[&@d[tMdts>Vj!j}S:abGo4,ƖG]M0UwWggeښ6_ae+ rUaĕ۾)0ĊK$ʎϩ($bT, }AwOO׻>i vjе/f҉ޘ >;J6IG7o=J抝(RAT'޿|:{ LNVKxgM%MYEV}7?nn֗][u:y_Z=뤺/$*toD&Wl="{Q뵀nUZ=r.;4.XaŃ躬7hA$<7yP6PPg6uxL;{+" G>&Ye޸p0[#Dw8GЛkT 蕜/8k;."v4cN Ѱm79)*p$RZXd8'6', էC`*0@`6Esja]E+957N͏܁ NrN:.}|Ԣ2q@8qJJ7K:HyQܳ؜)ZU+Z,J]&SYX.cv')t~NoYoUS3vr{\\&za;0Ss%UNt>YI ݀ڬ~}seΗDIv:63{1qšKӟoWkVe/էp"aZD%ԗ .tk?؇r[ !DEkP"ͺepzP@.(*]ss[T~OCֵ?RZĵS" s03resa5\Td(eʘb(q_<fj.5wSCUf-R'yUL)WIa-Ͱ NCtV%M垻.Œ:J6k+67XEKM^9Q)pt].RDr, -uF4S#j T[˔L ѹ z9엞SKvJr̵z(n|6a][W7}:npLB xQ?ZO+ڿ W˖trIE"21jC^Y‘hiO={.A$GbWA [˓fEUpRSZ.IKןӻjP@[l>ClsqLmr%YZ$5V$-k˕C[*";a[ݮHswver5}^|$Haga!+ zPȸF[(r{5f\%ٯchQHIKACm$RǼ FumM aO)r'_=l3I}Rm8-,d"-sx /-3blEs!SK\"ILQat72bQ)Nb1koV)xV+Ul ۞Tˊt"5o嵽䪎+y3כkx;HTKrEjb*YZJn^-KO&o6M~QSa_CcjcnLYjNj2kΏ9.2kßuy13n@[@gO?-R犾]Mdhy闤8EU}(;iaT""})P(Dc]"@rVENʳgQsoiSN f0Qc/0Ugk 
SP0ݣλfck`RR}ct8X>14䕫:%&de@p]̛4J~xf:8̦R,( C6Q&O&XA 3Զ: LgL 0Kz°$7 s^^lYA[ hW@Zkd8푕#F8BLMAkSI{QGUNu?y溯NQtHo۹-.%HZV; ANB1adۛ3;X'HH i]k@1+o1baH#][$x U2c AR;]9Y&ej&f26#LY~"s%& 2QO{k{Vxi㕒xz~p3yS=^jKz-kIC/!g W~a6C"zs+1mCz~,dpH^'@{}xSޘ;NZ,_Ķ8ioRk Fv[${(?Pi?ģbl<)O0q&.HdbB’b@§yC2C{npu(7;5"% Z$Z<)0ZHؼY;:ͳ )S3Rh %F*ׂ#XVHQ/3ozuz.Inf2ryGq#ڰ/>VҙOQ4cwxj!kG #McUL Ze9=(rR0^XrEV Jd 0A"IE}]wdpUXfnQ46tuUb9Cx#?(E yi.6ͥæt4ulz ߀ %F`D Md aByaU&VJE*a$hNCޮ u y.zc&L3^#HyYe8nw nr{E- '5.]|8}Jgl\zxU3[}օxYC^Cd˗C̓x6E BZp3MW%Dn&`# gEQE-|)C@2E+FXW)r+Evx1x~s6)8s0l9 9+g@q>=SxUBq X¤@TDH479e``,GN4sYTլX.ޘ/|\)iү-I/vnY~Eo y-G?]3\wg <M?90!8OqRSW<}X}풣?<=[vKߞ\_&YOlrOɵĸqeD/&H*\fIBHV;ڊsM&g-x*$? 8箲Rj$. %iA,8bꦸOg61 ,} P <5ż]ybuT&([4‚SQ`]Rir,08ff҂)NT[R {wi@ߍ0#"+y*+ICk=DS-I2HBboRf@Jן%V>!RҲS=j5V~obg\Z_[Hd3z f)XYy]DV=F,$2~,ݷ“9b 2Ŵ3A`&aۈSR%xL\ZG`3 "Ź%/T0&EdeZ@EW?qR&W7Kzx+<:SF$NN6T+Zi?Mw B06_cg-2u'n77,cH~]*& *K44Iiji2fkw)܁nQ y-B$.ݥB\}z  )BRbb 12 yaiA)4+\bK;T9OaSQ@\!4FB0NzP.=/!Ϭ\lKaT=@ 9|j9mNNMq-vИRsqO {A9ӌKJQ,xK=su~𢅠*Yv,d 8wmz?J] M' X*bSWfHd٬s]LE vM<#%YBD9Jw6ayXk]/*Kܮ;&׌=a{C^Dnԁ5?ݼ5gC}uŲX/ݳ,uxjHÿä% yfaHI);98zhŇ9 kzd}6}xn1w nї΀`DvNZAX!u* BB*Dz)V5.\n_YEwl)ʇkLû1zp)nHUh|zʴ z6 4jM$;sl<%LŃ)4SMX\d)VݓkzV F$_>|8@i`,(:ϠE|2;@Qov}^?S|f$)k$Ou(<)"9#}mHFj}JuV9"!dJk$M#B왑ALոK ` SK7at(5!^=Gcso n~~/C rs%) |RsXTUn.,gJ3-&",kjpn+#GE`I03ݽ`1X^<ʖ-D!JR%1yY*e%_W JP8p3Lي yQdJT 9xIP1Y2#+ύXJ'k*"J 'fF=9X7pb%O؂K^D{TTK@ʰ"kx䕠aǢ0wss^QbݗBˈ2 r>xdwTR$HK VKEȻ *㊊yO>4F0MR5ǵOJ<ɹRe˟7qAz9K^j*I%БDq!\IkGfÔ)țf{iw^&ԙj{;*޻Ԩ1vg5 &Nqt6PBqI1j8{ԣ8xy M`l4!~4.;gr3,|&+gsD]}}xa>20/mC߼kX>f+=ݬ`)=)AG5H&ǭ2J؆ 8i@@dRJЍjD-Z.X4T44?P&/c\OF(.I9:׋D>&X0LG`}Z<ލA;i4/k=^ ]q[^[gW6_.WH[R>CD^;H5|k]w"ANUA>wO:<*5gNP HA{ *mL+ͥ3>Z&{^e\V5$dtϰ4rTxCy2R򈚢1#+TJ{aH2UVOV55-{\M:A=x/0^.  
n4JdA`Qj 2Η5)i 盙i kTq9ujw.O+1j\ 2)5x5IcTGxIp6K^AtŴU1*8ɫu LQ|A_<>(Dr;>-5)[UD+d*ӺkuZNP߯Os{Tr"\_E4iّOon޾o)~^z]J [~w_EӇ1՛4~ٳmlzz_j1WZI{}|BknMNZr&ӳ+wWoYXƔ-jrC]|nV춱~;N͵IƭyQ&ýS2Hi $Ĉ&F.;8$\/oW7CNV4}*k\$ `mZ(\dyiSM=^͂BkB<824/xg:0.0L&ML+eMMhbĚfh6"|2:@gQ\KgeDFk%27H)G&?xJ%~drѣLL{AMV6iB P-- 61-:y`>8aC`t^[gN G'hL,&ID 8Ʌd8R2M?'O5fCG9ٜR4IHЍu6IʈJ }磤N dievA Z􊐐i,!.B  8XG'U&¡}> `̓ Dz͛ iqÄa} 4jTG ܥY&jE2@H %īTC DS t⶯VU~y Nq6~M&`48SҠ:rтDN .PoyX\(?2 D79F+V$&mD"M;m]!ں-ihHjOE, ƣ|Kw[x_yMy4roYF")Hk;)QEIq"7XImAz<粁֪<#yCsr؜ՃPcjdEm(Ʀy8Ő?cR9iWRjrNǮD+ - r"x}'%/i9R:N%]'"Od%e/xgD8D=cΞv3tFK hY6w v i3Iux|r @* %V{"|iVf. FԒM1U1yzr@]4e;ΰ^̆W3N Bv J{*?j0? eHTTPL*QB`./S*YB8 T) J ٳ8ɒFܻ"s`Ž+!ra|*ag IBF uZN†(֊'V㌌RdO᭵FJ¬oe1bԒ@ (3:>B RK镊s@iRSFIaL>s8%XFrun;+8)hZサ*^K57\ZAmɰ;mZЖ=^ -'syA'X2څ '~u~q&ۗ4,ç?֡DW 'X2cR"7rҲ@}֗s)a5?;88&<,ܱ jpFSptdV'8lc>eI[RmKY^rۧr&ӓݘ>H$5PӓݘCZn@;uhF9t;0BePZ ! ZhuĨeNDeUIE+Pۃ^Op.[-5o?\lY|/2%nk^uAWW7ܦu-Oҷ^}wíK/W~S#ק~S^'7 HԶPM~/w&*ʞ.N^lgJߕ+5lcC7kÿ*7E^nU6Ŗ">nFŒޭөmwou ۻU Z݆nU67> [H3zZ JL.mlillV-hwB^Vni{Mi2 +CYyg=CZHς5̥ѣ@h0YQi)5HU_Kw(B jD. 
9@c<*/3=e4o[ˤ(Q-~EZsUbPtTQFӴV x+cӴ,Ӵ,䅛hM G-u_]A-%SF6n.鴕&?wԻ a!/DlJ Sk 3u푠`iIZ6gYH;5µ4RFXO$)IZ&{b-Hфv4Heu0hhS;dnZBE/B="RFoQtJ`{(nU6%a3ɾwӄQJ 6mt:out zZA.nU6%WMr gnNm]یMdkۻU Z݆nmJqTLF^xn!rs{wyZiɐK(ڲі-Z="ShK8QA[v9kr:G@S6 @F"wq7kyNC˃ȦO¼@kDN iR TT.b%r$m V4OkUԖftdbӓkx7Q@A 9z5~zr/z3h2\G^n\ΟP9vG߆RZW|DKhtC'=}-T<$7H'ωF9@~3*&S S!̿} r,BQ&ԦNbEB=DlQ_U_p%111pGWH诿rQaPiB  e.ɝF㨑>":SInu&uD}5&L/n B W &UipC6!dC0[4d=sI^gBP.b6a7g fe@J5>{ъ&s7XQG}XL?ṀtMY#(Pk"gI΂9F: Uw +@LA R , SʴOSՠHHVPR.i-bUIW(r,zxtD;9u/M{fxA"ޓqd+_s3j_ a`;Ap؈Z%&bw0~O5)Y&)l쮪Yi)V0Eq^9$B$$ts:цNjBOz6֛#F)Φs 8?әօw^bnb4`^?'?/$}vߦ!<im@%Kon.v({IW3q|@ cTTnX-5ۏbЂ&_<T ҠT_wd:,^UbGF bZ}úGՒ:QmqԆc= :ASJs.KXN(E1H1RɄ*Ǎ"2+'))#0v%NjGݝ+896/^wjNծ,i`k"4Lr,XF#600EDY[ƌlt5MۘXM{7fhfCѠI(j^U ߧ:)!/b ]ßCՀ,ip33Co~Cx<_ o)QjCumKYaǽwe>%kNN@J AFA|FQ 3 Tj9ǥ>GBCG>&j }bkr`v ׵Q@ѷ2(h J iVqs] Q mit~64GwX) BseO $4O~u<+E͟d0U>5 qܼzNe cX4Fkr*:7} xՑn.Ԝoas[#[ˏn'a.A"$P:)I@=d2CqL Es]"#Y"8YVN- Y$2KgKw ?VkNFrXOMUGlOx" ģWȫ)R3.0hVz9ME>^FyøKh<#z~|dHo"YT?zLj4?kݎ'R;3l_|1A^̧Hᘬ }/s^b^X i"ϊ~34E޿|M/FMD.#sU>Yr`qzT{.(vg8(K)֔z4|.ґ!㣷{{#PIJ kehTQPp BHb+ 1Z 1 =Ev}فJ)$#7Cj~Ɍ )L}dGy+:bNvwzJCz{! |9^$i睲!(rDт)}.(8ƳZSK=W]g8ӑϘ /4R0sEC?(BPSʈ<C8fl[`PjV>w8pqp.hbYy4?o_) .b xЇ%ڱ/:=____|94ih FRV e90E pj )/E"D+gCAW迯ZoӽŭKEjof܏/'av߾=,Z9*wDp^pruI͹D@bіƸF9q5[ D'źX!8jP辛v?>xbyEJ5$2Œ8+mC^r"je d JF%".O^DuP>ˤr\e#C>_e=q5՜R@&C$:JeS%spk!{5)giqKɕY')&t#Qo%O'g\O!?dla|USE-s='a"͙JI9sa@ ā\/U>׳NJSɨHTsMOQ 9t0IA^{N Nፅdxv"gYHS{$xL,Bm)iE>P<6i<ʋqS˜=_"#J?RG#i5Fc E:+\;X=P D'pVwj&(\aœa  `c-B†TDk9!9H6F\VFy( qPmT!!p=U3j_cs #ˈZxeuiKUȺ FPC3P#Nk#sw.RRK2%_2Ď˼W{^%{^^-2 $-y)D yPF brOl g\g2NT+YwL!8BX'%HY:KCA7Plq)ݬ閟q* xˇ_uɻHۼXʏq1Q(ؿ 6ADtoO_A42`†wjPxusJ .Ռ[38w pR -M90Tkpq"f]Nu1WX =391a &ڻs^eyw&c]@2%fcaQW`<㥳Dj2. 
.]'D6㥣 k`TͪȴAy$ݩ U}8ӫ|2OCbpH(1 *cOc 0A(jh`r'g)*hԤTaV6P"mG NS R`1Y>8)5DIFN"I & J CKyvdRpT5)Ѫ1B[`Q#@wuA`dJRLȓI\N!0AA)q5/piL`:BڻsB8;7R˔L`V`;ۥ` Yo!)lG\bGmp֜Pl'S5kj"#UBlwMc2k巙}qI@b9e- n4F6*lo2_`o\ W #0zPA>>8xKʲ*%v oXCth4"-zk&2{WA ItB(0?V c;KTT5TCƖT{g!h+HsOGWCw{Aa,!Ps|8u tO9]raz $K'`q7]SK_u 7ÎO<ǻK¥'A{%$pvh(nS>'W.+%1AkZOap{M[uncouJy/ -0X3Ǣ!=60F}4tNW~e耧l; GޖBe{PbN!HSx,\r \ƣ;FHεUъB2L(vƺRGjkQQ(*F3]7ިx&,`7Bz=Q,1ؔ:WU ηv70!L)z mIb hݠ@@:B1c2}c,OfL~r"A:P#F#{.<~NaBc}vJ2yFFsXXU ^9qYGCR 2"/$DcR 0q΍ y#ki#jpJO9X]K AI F@:(-Ia`kP~ҽ.:We"uKAfRU\Xѧ9DYIh|yXpc07_m= n: l3Y3|=npuۖmL36O?L)ʑn:I^afIA\P-A(yR⽙\Gk GT2$eE<7ª =I=:_=X`)#WC?L/e+Vm薁;+2%R9R\"v[څgbc0|udJ?*s,w:6xF ^I߄RcҨQa,m8yTw ?;`&XN Z@NUZ/K !0h0r_̒bj[%`$ gQ4KiŠ*xř9u5Yڂi NDJ,w<3_ S%K.W +VZz>@Fw`%7) IeIBFK. Q):d+m-@ dn nّds>JJe]%$GP6dmsejz-A2nVbV~om,_^ʍ. ~\e^_CG;j]ϿuE[U'vЯ p'mH  ԜA{L'c4RULpb'"GA$g4W[H0ǫ!>Ŭ3d/8tQ<"VF ;(PpXH\0{}ZpiL9!+[k6C 6^o*-耷Kg{-VH.H) ]qp6X荂1gtR v3Ks#6"kmiR*κ K ?M"'"*^c3ȍKoCdAl"W~ keB n"W.#K\'XeTYuG0!{ ;Nvϥ߸dROx;-IW|@2_Ѣ˂MB{/~M`ӳKϥ]J[b;l^YE"CAؚP /b\*'i`޻uûGW6_]/ڒ2 UYZu9_8&[߂qeme;i#P!E`y5RUC 31dB#\lp9j 1UCa?<{N)JR:pԀU8%ů߬df3 g_niW$~XlUrFέ"]!`ħ9:\wLת\5PL(D[qhEݶFC$q)%$" =/{@(KHbr'@d٭j!ħnt2p:D+ՅL|a2O@iʥ{Ec#Q#wT$KLl2[4w#1)/jgb\pb]Atm±`<E$nl=f,fjytD#9̛xC8$9M7 .$- /Æ/Pw ?k6gMU9 N5*,(Rp XzV 1G&j#d놿oGbB9XEAg=R3%qMr 1TSp${ezC%Bb :{M@/[.KS#I@ZrF%D\X%&^,^Naj#s(Hsp†' CRg<X\FRZqT;|iڰZq\ [֏jЖK2G9}!'T 0\ \,Go2ۙMbm3e\_J)\*KnxCoXFR8} gST'b&a&SwVY6o &J#,*L1r Mp%BASD̅)&fۀcg66A N+ea{r9$LNtp9V9b"- fd6[o;A ͥPe0A!&*4|V9xhLsAdp#,Ƹ}Jq+F%#NP~P>)>;O EhvɍQxxG i}cfBULeՄdžITTLJ&4VKN¨$Jfr&ŒLFoh!8T2`=5'c$I\6S1'zV212Zwo9?!H=$6,d"H^jG_l~&k i]]\קŖ˛sx?X]D'zk@eOӻO.sy34IyH=YCv@yla'yӳ},gyʃ\Uɭ{jrk6EWHw{_w%ɳpvvphj>8'j^`U2rWu֧)1/W  d\I ;KވoKZa0Ǥ!7Ydź)CL1tԮ@7!zq˘/%Zۑg2D<ņ iW nG \gh!VEܒ5N lxA ybJϗ8ZnNNiZq~]I\؊me,أ\嶼<*]!O:4׆mr=or[ nʽRד;2Rjy?jpO?0͹|K:ү/%Nhef3r:'_PsRI̽-woC?x<֕R t)Eg H?Gu݅䤄h AP&h?`4EHOgA%`ssIA-;NV_)1x%48qy~'1a=<;DB2V\%ԍIgdLxS>V';S6= ?yTꆐ_^_}aOmeyYE4:"HTYf<K(D4u4֚UH2=zPFNjCYa?à(XxØ*[Wkb (+W2ycuZCaAz<@bɝH랥k%%YdWALu1 S) 9-%xGZP ! 
a^d X, ST#'F7ijA09VuxE905 C>gddD's1 PHul)eu +H`Lγwz4WpHgФm>V]bAM:{8OpXf.ЊG<իUSSƪLnmYg~{'w}%~Psw,]͖\YzwuGls}"{;gRzJQdnUa<hLU[ALuNbFtRbF|FTTKn-[ E4I*^Щ&0JOiyO8 95AJ{J8)sumrq@BbwP(#JĨ1<㩐oTU#IBsu\CSZl]J_=S(e% 9XLq*I$6X1[nlQ2#+%ަ}?81[2R4+9#n5tTlLİj^6YK^w%q:ZcutDXꒉ93uud#Ry2 A7UFQN*d%+:Nk)˩NMMMR11QKded/G)I.k&H?LJ@hɷ +.^6Wnl_?F:-WţZϊ@w/ME8_Tph?SsVb Oih#c}|ʍ~S؋)k_clއ~zndUjZ!0Ǥ!_hvgf4j7V=X헧 /^UEbMGrޛ|| ޻$/bv=IsuI-x^ߞ2ļjyO\Yr8p0CAzC-ޥO`$HWzC?? HmʐH+srK*E K l~5k/qnK}faP)5AO {}#xl[;/32ʹaX4V )hZ}ଫ/`UJyNH6/Ӡqi<=GRq3 *Roa/kځ(݁WTx5~Uq{ȾC5r# _ i9(Z%n}QӘQOaKK I! <3׏ے/*B .?{WƱJ/3-վC`9A0Ilq$#*3H#S$-(Rf/6d$UwlUu+;NݯRXPUZ]B+G`5ٵ(y;\Q%f!&sR}D):m8J&$XU\_W^qm;J6mJQqo#1Pͨss(!Ժ%wZ%'CN5S]f;e4ey0QԵPw2UIm2uPֱotJyRޣdRv8Um ݵC]Ms12q~A]UyQ:L7; k%/C^6LTS8j8Yyለuah8JNDW0j;** > nmPيnJt FhBcdZBA£V=lʛǬO*@MI%V ˚!Xs|!icqVaQ93EŻVи0گM. ~ŧ.l L޻*}Z(PS/ׯJ [PRm~!CVQ_bt,zޯ]s 0w}pv~km--I~;,?X2J7}9pҖMC,BkڡMqm^=v~DmէX\!}+0 Cќn:Xsтrm-m%O`ouzߛ~ܴU EKðƃ~n, },G¨4t_p:': uZ}$H/>ikb/&s!^Sf21D\|9i#ܫ3(i>FUbAP)]CP@:xtH`:L/0ճ@l4931$y6`\ lp )DGMfep~'|/Q1E'̌Wgƙ r1'ُ |?2çl2 N˰9 [jH$'; x*uxXr?.ܼb ݴ^~$I/|_{}7Nh 9QLjQ`*Ba8awuԛA~N82bnNzb1Ag/Ii 5_>3| },Ck閚uH7~ozQk84h WsHQnRKɅJ!#w9s8`Qtpc 3/05,9Qr@@ ;*fU2 cdr}jhGcyuߝ^߃4_;XfnH]?vBXw ւPk~ hrf=NgIc0Fqr`?EZ% ~pcFiN[S 'SwSgJFNnR'*^qv\OA#@Y ׈!kU)QZf@q!9(VN~ Wߴzì )peht9%9.0&b:4JaI&b,2r+ n]v׃-Ak<ꏗJ>Xb"(Aќs%1w؊7IkeK #@ BRD#Z\DsUjP(L8~9`,aqD6:(Vh.U=T5{*=OÁP^A :B1H4}Aq8}a*X£կ Ay,rkn^x $߮oP ViK00 f ![:섴AZn4z! 
΀m+̷3\Q ˵*6"X%# ^?[Ig2Xxil"4\p %c@"A贠1 \Ē#BD* =<:Pb,/Ff|j܇uv׹ՙ_ )8R$~˝ȲJn>E^|P`KpG?4#߀t/w?bf1!/w?lEKdOO ҶD`0 !NprdJ.}ݓT.R Q!viQ睿ϯl-;ᒶVr<Vd;&&myܶb a &(l"`#c%1y<8 9f3Fos5Ak5k!"O3 5Jy5vET* QQ`7āX#c`VK Z\ JRkAa9ÕFb xF%SdG5 ՒQ9ؕ*m6X2TFxhER5 @!@F *xnChx_ѠYk{9&ʵRMA,UFXXH!@H𠩈9n-xb(jR`!.* {Sh|L g $E@kQ(c$bD10˸fVJy=4D 6bIA@1:=WXx*x F!EՊL gNQsR P#YTB*GpօP?0ʣԌfk qeA~{h cXK%)5׃g (#.R̥ E B%k hD#@kj$O SBBBcBp0Rq!H*fiUPyx C,.hIIhxKɘ׺ڱօZ31@HN=Sql!AX.Jz:A}6HkЛl`l0Wn\UGEVtzeQr[t4 l{ʴpA)FO8edIRƸMIN(e?Qi ڐUR~_kQo(ɞXe%Rz\ҋJםp(i.~ezd\8y+ǧy7]0.ndj as .A 36OZ 7rb9N[SN]!%'/s )>7:u)*1aLÿB x80(ګhTt/f3F]k &͕B(VNjĂlݢn|ow4UIloF!:-#D٣Qۅ].*&eŔ{34䂷ypȊ2]`o$O~fz>4޾!׋y:1-iVxGdoFxw翘Ö[UgHEӾL67T5ICtwYa ߵ !4Hh`I5Btp)*,pj )/X# ׄP6uZ-|RV^N έf2m YdPН5"jp^p)GB f@ Bac A̠#o jIS5>xz'MQ[q+m܎UN}e)4t`i#`zV;Cv8LTk7Y|R޷1ub3"X_mxQTzU3'm@g)%GB *ۻa%bzpRD[yh$\(,I}>9-נ—m}4fMMQjݸC 4=Rze$E a^8%z m >op!Lr0!}SŨl>(EBท ΑOf*pu5krUl\edLzg*abĔ*MK!E\:F뉕y \+UOx>3/kTڭɋS,"˦OeD$[R$ `X0il}'!sNό&O;uW9-?퇷_#|uLx^gnr28\Sc^TM -udJŗ||@4%?| XB*| C0{iEXE@ z`FWAJ ʸ] {yO "90ێo(6Ĺ2߀dOZP,߷*Ul1y妫$OVUPo?9<,nQi̎뗴]8}0000m*YV 2gc>2:8YъdSE>F Uyϫu+$ l(sIhV?\˜-.΃H/y>NӅH]e~R+ZIVJ=X6 &BG ӡ kFh$S(j$/]@Ⱦra\767,ek/kX^u.Ibg– ZU'N&k|R#ڨsƂH|ՁX akǂu3<_恠' htxX،I02ȂC )SN`JGm(uBDK: ak͋<\$*CG8L֞µ/P oڦW 0uFnAWnuS$ ѕԍ)5Z#kj 7iG[yL%p:Ďl5 QUlVt'.?e,Bz{AϙoVL֫=Kp-MEuoh(-bowU8D$*v@pvز׿.\zsj~()-,.3POȬ*P mOT=E.ѓx1YhH{ kŤ~ 50ZD\YfkKb6Pvp{G׭@T&G l2Igj:0[qo^[&iiQ<ԭ\Ã{_NOFQ~Q&> SAޘ{*썡#H!-+Hې=!|xB+[FT#S:px-F^4]6ftiw-yK(̜}^}=yw-I/lvZ ^LV֓qZ;Mn ur~-7{b{zptmnl%O3uO~݆^evQ^Hk}>j+z'w-ǶEa` +a!feуC/}MF~@FfHǤfh'][^ڱ}1֙ 2fU//OL$p_RH{0pRI' 3k6FװhE6/PrOS|7w<6` [VdrԉFj3NOɲeMV]t6;pzBy7ʳՕ2 'z@42FqFR r@4:664㳋M4q;(%!{'t\W:P*qJٹwKԻ+HHOa@L~Jd+7dTkƸi{ҍNlo{3quڃ{m[J3g la '6džXWK8U bhc'v$ƣƚ q]w NJg笠?*#ŀ8ؘZIs)~MۙNqcaێo-&zX P} MI4[\=ׇُڑē4/"֌Lj^E;\ه*|˺t"A?ݐJ J-Ϯ){]0БXFB_hyDFSڀɛ$ :(sZ{9TR !JH\+|FҨ+7kw vkoP[-S"}rʚ|Du @ Vǀ1EzE-Ea-]y*lr"$ ?C}idXJT N|L =>ccMBMd{(G7o0wӽ_SMڑ6ѼiGRF>F8X Uvny#չnPl_i-[_r6h 6&@+BYa5I%4Qp,,4GEE:y ib4̌%ڰf V9*-8v4e@/*SbEp/L-9JJP ޵^g'"m];ag v8D9%8r#,Xhʍ 4i6llئM,7:Q{ h 
mUnپ=oM1V|jZL\2+v$ree\JYDt6h%L! N'exLYKv,*,]YKsK/-!rzR"FDo,JQנF\3RS+vc̯_P%y̹hS # sZ#I/,a&Qx0&՝|z4|W~/>s7iW>/;M˻pvb0{ʚ7b\G'fvr芉YjgF~C2\^!19u}'gn |탓6L imm/W+7Ddo_>ΒQ(ήX)=*2$(ܔ 4Ek/ל}虚t誣PtAvbmX`U/&uM95)3/WsI̻QQ*]{7V޶|k&lxR3MTwEZi%vEb"sqv6߂L4ƪ%i[Kyˑ+Pcf+ie&e֌qz4"_%5"_C4Κ>G+}6^zu#dI2׬m{~^HYDjyzBio1՗)׏̀h0^/6hg_7E*.H {\IHAYZV8uZֵNNNN2z 4Zl'9xʄ2X'fuں:0OAM. Җ3HL4I*Ǻ6E6A-vT<1uz!!F-1hF_EK2K)҇I蘍u!Jzw1so-N(5%MBqF~$TkƸ>(IEVi)(@q=nMq54pVJc`0W'G ME+ڂ&T-`;V Dsp h@5/ELsfܪd~;18iu&?ҍLa$~@E,d6$K7&1Tr4+b( 6,D5_pa*+}YmnW%o_UƞXڬoax)'Lr )RJ֒mmN52յE$6o+B>дW2E|U|\I{3L z<){oi3vqo31ݛҍAfQea &MQtƻ LsP`6̔PP98%-O5W)RIeDP62N3^\KD5LBq fׁ:`v|c c}߫5nszw*ˆ֎j?PLˬؽSB)> wW^#$Lh` ,k)d|&Ś?U~~froʰGbB\Ã/'%7_>QRQ-ym:J}D'*C=)q2K:f쏩xI=++1Fr+jc?pxsto>>`lCP ?\rw1,R+V-9Zgv8B/LSuSYld%QK;_?ؤR5Ike,6ϩ:Gf ?;}8~yЋV_\ÓDC ؐYi\QV=)))p ׋_` {\EBxt`Q:0, ,D8*Jjxjqf)K?{Pܳ&z2NWO|u#f-:%Q/?c5y|XQNng٭F:,.;fyZv"C^A"{q©)8"P^-QgTӒv*j=MbUӓ4x1?4%C,,tdFmdl(J”Y̔~ Dtl>Υ|8d*St_bf6t9ZO և+gY{m.礴M."Vg=ܝ [XyԠ7HG8:Heɞ0 uB ?,B a^s螱cJׇBr< LSK]爐AoryUH蔐lo>}sт39`Ų{c.%h*0`"r$ /ԥd`|iѼ&}k1ljİ) g򬻋$QNQI(t=~u%JpG>'zID;ƷA:H:>FBHzFc)_kCI%vQxWA'ߝTm^u4<AvKū61(W<}4umђs*sI; *IKfaxD*̭Һtk1"ntmR+?_Ũ5=>__Og8!!_ac({8'^Q5-:1FT2Jf]# ^jz]b)"^'!ІL)tjS;׸mz'D)9ކjPĤ*Ah+հٿξXl.o/gW\/zPr`"39PXuHl+Ƙig va;a;zmA,xˆR`R.ځ n%QX"0FA`XU@έw`. 6wt~; յiLHgi.X}YzMb<6$czBwj/)n\^ZI7IriZ[4C [&b+pą% tF ηh %`TVJ>UCR'2َOs(W>c*)U[!J2b0vycXPn oBٖI@wu<St flڗٸ2ѯWH'&r/~V_/ތ\něv9" HeÊ۲$ד 8eOp \q\Ph- :cg>IGD>Cb,I:j}8ヹ7iWWc5ɬFXl9[׉Iǭ^b}3{"!T$$!r1 ݮQ*Tߏ殹(,J. gƮ-O&ԶމYL8).dk͟6k 沰OBENчߍnQnI\nYFDНg@>&dU.bu6oQ"ZJo%%PU6/ZI/SBY}r??TSw$Ob$&Obx\(Sm7syiS :R8(B)RW4;rŏW P2:Jp! 
W%c 0A2)A+C F JRb!cQi_pIM?HNg%.=N*9MyG7GPn%!}9~NIn8C Mp~y|>0MzxˋRXQ5J;bTlFWFq*qf;$( JoȉGZOMVP/V/z]c9 QE-S`o/.?^mҙz yʼNuׂ~b%tcޤY!L6rP(]J8eZFcl )L N-S$_a% j9J Qd` y8Uo4y^RR#qր0&R;#8fxArHkiɘD #gz% }ܕDv&xAc,GTc%eyn4-9]p*[ 3cޔ<F)fGNem >f*w]z؀Q,3%*m@=,_h%ܑ!Tb]v5 $c1 ࣌c>#b1ĒJBP@U9D\v_' %1m s%Ԙc6b Tmz Ù1ܝJQY)PRtVn&Qb MGf(bEI Rs)K^ q'*`$s+ei$xw_t~zi]6xIAH%O2P^% 설V^" ^[%"DI/ FAi$)qcpB1H&"xAGqp ^nrZ0 TKLeSpZmz@KE2z6ICSMC {z0 B;\9[Djgb!MHq&\aC@dyl`Hf jީ@QiY퐩g6L$/eAWKSp܄Zx Z#,Gf~T]tBWK;@g83 o?u B-> Ip%|dO:3uOUɷ%:( t:gCsj- bh+">=5 _TP̃y - Z.P(g:JFmne yRZdvV+s Ixd{p|7mձp0Iz6j(5=s6E@)wc[F :!-B9e (h3Z>ϦA @G.`z#勳@V"l++}wP8&߱xaE .zPxoR9 T£m}mKCH},B8OcBgc"i]iGK#ob, TB!ı$˵m9S]bm*S:]m3(*zKMDazcOc%6 Ƅ=bHZ}ŎÏPƯsjb$JwٗE%(sDR0z]("rܶtiEB($P~]ZApdh.4n[h2iSKu'aG+jPZ!$bX)+b7BD_H/Kck "w Ux"L,fZ(`nh 6zYV棅n3b(Ά[.pma٨j%&ktnAuHKg K%q TI+zIT撒تрQaԩ3*a vzlILk'\jQQ:j%U(kthIԧ$ tr" J 볎7_f=OCUif^>- ²f0&\b“ux&)|MwAq4~c` =<3lLƕgcg&D!%I[#{ă/E6jΗ_|t_UCl 0{Fx/6A߅D~P$GahM/j.=rte]) ]0&tvс[5^0xkĭ OSy 3MJ1>3,GۿәqS"l[m@n %L͙&ݒpj^5!悵kkGvºѣeABFĈHD?Ci `6)|*9MӟKJżSD'E՗[%&s`bkd RD"%|us$1qEȎ A&C}0%MJ3^=]OSHBR>^ # !~wOkݔlg)(]=yvQ+8 '-[{$=OE:vh(G{s:q4A7ٽUIMSDJI'', BX,F3Aao(bD UA*sFKmt4ݷU}(+ A7\VCjK)"RKv`(Kl( ;(#N6ocJJ%p|[ ) )]x]K`Wv=V K/%}cAȦG73Mi.QX@ әz⌭/#q/²/8zmgb^ݻUYdVC/|jm'~}ݺ4_JxB~1B:W >X=/+hU2#G\ɗk O{P'uo0+dRkYJts.0cT0=e^ DCI[)&w=| d: $O> n(ocg {x#?<ǀ l d?,ǫ,ŏ,T[,wű(,MRBdVwZkE m29ev{Ad|.m6-qƪsjO3&&CS!gцVa +"s0ۚJj,L#ʥ,_fieK_ CTpq:@2*peƹϔG 7sCclU9SFMRm# (:; $nj#>3׭h `Oc[kEY gurTFZv: VXs5Ib}1gpVaaq&I!31/zA3i||s7Wkҟ.|0o7|,^QnxF- n2g28E5:sˬ^q+$0V$hvo $nпozǢEIզ5u-k3֊/G9ˢ >ڃGba}cڳHcB CbNS C2F##@:X.6iT!4$Viu%Ѳ&7֢%SG˝*%S)D\ 8aTh 9 HfC ;pdys(W{e JRA@8VeKrh$ }**s[d+Q?.-&w޸{`E̛i&n郁 qIJsM`㸍yξ'7~=<\}f`7ט>Z]3 ~`{>Q$9@|~B>Nc.q˽^ȋI4v.ܴwE;H4;m` g1؉/ j<g`K (Xr zF^J1  A2L+%WBcGPF+: C+ea&Zu_a",YP9'RJ 96ZOV=T⁙'X[M%iJ,\=4`@m^}mg&H wkgߜ &oo"&;mv#o̿ uue䞫|#bY?e zȊ,֯8ܙ&e 쀓2/7YJ$)19-ދw0L;ƸB3KNC+q*hfY۹$ iz&y˗#T!3ӧϘ$myOclTeck%x/%u`Xʸ<-c,CjEn? 
a/=ybN=3MwςAڻ禱[\AnAEJf{ o'.>q{Y'X2aRQeAT_jM@FZVͦcreY &0dl1[OUCd擮|I˻eX(j:e9U px## >PQbPggyfDṲ2ī1ޚT.0K:gW]-c 쇰u8}:07LcR<; A3XhBXK!ؑ}TKZ¸T.[='nvJuZD z\isX,:;4 4Z2?ӪU$U=m(Qn"PV% p2Z#Kgs洊7:Xʕ?{m gB?y808`Zu9k=]gӷYi^Zͨ}bXJOۋBP*\/< V=+Xg6 }ymX-C\(JmtYa^wo@>؊YvHV*+d##e +@_1w]mIfj=O6 eU?;kW>1@+QMf!VoWk؛ 0u:IvUAwuV!\u(0uJHb8ԌL] {ܡ<4:HxO>@P7V1Lb O͇1@ۛt~Ꮧ))'L+5u_߽wy;7pg}H"Ѻ7IC=)||O1zZ#ŁMfIZu '쩟+gOӊ:h2\6M7fxW=E'-̨ppFv5FW/:eJMAq..6ڵ^ yU.t&^zrzџ8`2%\eF v:FI}RBDI/8deY;']4J1GGM(H |cŢuXKy}mԶum(@!Ɲ5ܤVwMYc[CN\ֺÅh)ޝ密7 9eSק/΋5mhUB &ꪔ0(ͽ;5J}9||"4]{7F_/fal9,:AkM^xpB5/}2r|dmG*>?ҤɚO'0Hg~4խh+G.゘ |h`S腸ᆊM_fbtʁokt&% *E8z1D! 1g>.y8l2kR)J0<хDLϕ2XhPJ aٵthQjOs)U!]NDxh] wSoWzO=٠鞬un+%\`/9/^xSh>ozIԌɝl6ER>vjV/"5dx&"[8Tǎwi]XQ uhzs\d^kl+hø2ǒyjQGbE!CіqbfZtǕcnB d3ue - l>Ljw irV)gKZ-kIIe" seR* ( G8Kai}P`_(EP|V.˔4jc-:F.OFZ' :].#E_W^Wr FĠcV)\nʎ% ^NZ:cc&fQf `ʫsZ-ˊ7jk$z<'>%mq9 +ɶ6%yzv?={G.LIk?k;q~z,=:aQ;≙䩮<أŞ3dv$#eu6~ ԗ9 &݈GH-7XNX1bprњL#M^?eTO\xgz3 Q>((JdiU.WcЖ"kdgC #㌎M^h4|| MV?>vfiڲSE@Rjdͅ5C9CI9dZ "?a:AΠqݜEU?gzcG F D$Xe*csV^Z/QI* N$~БMJsy (Nr,ZbqƓ6\VeXp2f]B1d]!'fv'ԫ@z㽭he*(k@lq1se8,&+tQ)'- `sʣ*iך l3gt'P«s*KZ'v S#xqJW\}P{mFyM=fd1A.w"X0(Z̫q {cl\4/J08$/c'P\)*;I`.d( h2ށ>l @)_/+ZђI!Z/+)&=iԣ?]=,I}@e%GRrmG|2(Hhl_'-՛/?ibv{/25!'L+/kDM߽wy;7pg}H#4ߞLJw (7?Ddwi9xE7g޺˯S|ҽşuNeAm 0LkɽpzPƸ=`e]4Gn}A;W] hF̻ v_upVӋt}ɹފno"~pes݌y:_תoHxӐR?y9VxnpB:敬Z Dl4sH(ǭmV\ޞmpVp`HrYܭ*s1+XM'C c\6'G8'a%!HA<39Rh+Bd1EhUVnnإ X\m>|Y W7nZ1u!>ǭx/]VG*f7]FX.T@X`vbe AŎw5[}fbN=j^hM_qK9ڏ#cQȵi &e謋 Cޢ4qzKpL},\d F7e)QE }Vr_7zl&%H!5I`jJ ԃ\BcsF-U/VPMcP*MVXEiBQ2 7*I; \t@*eҦ A%P FOߛZ,0L #D8_N~]¬&¬F4/胞0QkogY[.vwOXф(YIk o1|Qg{4+s<ثalz wB6zޝpIk. .Dܓ"ST9]'d4"h W5u8,W1L/C6TirP_K{Я5gs~i]*d%7,4 yY.pGVCBbxY +)f{Cw3bC#jC?|EroT0С0HHbKܻe戈pt'$UW:E5Ж0(l`dljHFl +ˣgfAiD׮JPdDb"$ F$&h1JYG 3l0j#'hM#)Œ+X)&8-0^U4ղ&A`!Of#7qZ{BUQD0Q]YoVwW oZmQHV*M"l͡|[;RI*X%]k5)mtck__ci>T1vg/5 BNY[\{h7Gꠖ޷q4`? 
"mr?$wwwwj\B0Y: T$T=,b6Z.2 ܯ|B,#F$\IZN$&%܋tZGY6EEVӆ:-h;[QǓ~X}-WZiܸ+jLooֱs*4dV\3 %x;F`l3М[3SF*P2yлb!Fmb6xe;ucNMj'`g߿hkJ[ThPH3|> Q<ИـrČE* icӑoWѯv>W*$Ffd͓V!)g2|1GC)';PVB~4ۮۮۮnUo%HXi,W#!PS(aCf $%1:4҄DiC&JM5af1V?QطHulZ: yoTL &mA ^61 :G 5rsAc-% 1 @B0F$q 5%5&̛[zPH+%gH*P0b!F* l0"H&ж& {Sa>@֔$1!N" &13B8ik`0iO@ѭ`oLF]7aK@ސO"ǿ2\|O q5j 0<ΏO_ඔL("1=?"G;9rD!qY"x;0.X_ 'Y>.t:`8{ [?): ʖt>7?i, yHMHp? ʑ8'V$Q.vX;WrRf`R1$ĕpkTĺ]!$PQ)b =% Az?T`.\;:@ʠ3":n7 Sة'[Dݡh%Ct'1)oɥ)Y J WrV0 m^|> XY)]aN +9YsfzDܡzR =DD͉șt|©}bq9֪-(EGք-#וokj@Jjv}a9W,m{SnEd sOz^h5m W!\!cQ"cf|sPhr7 ]Pm0(Aj&P9xM9 $yϹ!9D'

|4.~E  l@akkwLo0ݐZP~k91oUr aX2L-+M[r;Y+heD8f֔BVF51B&$ ˓{ XD>}ڋ ґ* e<pHDn"Nu2WcG떁9}+aHVa<5w{=[* aIbʾ񆟭o[@α:QV>[9 7, Uףd5<%r]{[gdLmR[wf"!{]*ڵe:*K EHNl%#ç̶ *Tڊjhci W5M$"ݙKd;Eۨk(h aV lFyJD5nRwUu)r0TepDtͼT1 G 1-R.뾾g~+?K[~tFHJƶv~j$67p9g}4ZnDq 9Xk_:)ǭ^ LiZͽ!^+M0BRdbT$JJ$R-Ŗr"RbJHHF"X+F)>:bCJHD V3ҒXXdIlбR(dE"TZcΘT0ULNsh;6emWz̐JtCKAEh.)RsE$(!bah" !a̹ hj6bo'}[-C\}p9|bB~s0 ޼Dg<=q.sa0:ln(p7nn_yF. cwvoOOxhig]☯ehp򏇽zz`ɛ_v} N^vN޼,,*ԺYK8l9iiͫ ǎ/YſpfٹSrj:&?Sφ=`uo#|nʵ87aKJp<uZlbS& O6y&0r<3Άñ?^?Ӟ[tꤲQurkEH>?^Zʦ=6kTyރբkʨ?F64+Ro---x) 4\^:xGڤN07u2^ň@=f^}qT~ث7mi^fr^M_L5TB ӛQUmJ:3W?ah$i0rzъjNM5åA}y2]EspLVC`Oǁ`e o?$O?(X;d}S|Qs~Ih{~|۳}go|#Mq$m> m}sQB[??ݟUݢ3t~8b-R쟾vjgz|]ױہMgXtrU;WO׃z+>GZX%w=|}~pKZv"9yg~ot<v }ac=ݖq)BJ|t{ryS^D'u/Uz +^O"l eT<=<:d)Vza*ZӛY-?E}*r\,"$TƬaUZMcȫ|^6qp8]1fyd;??7|H֩^ܸ>W?qǾq-dۓ>sFi닏i잛+1:;*~~ǁs u;v]P{" 5x)\\?}5S9V{a~:96t7w8M6xM_ڂ57? z"fœ{N A6NNtg\wnܸzuCP? fǴg/m\q:|>9l &-([G•*NmGSerܢ\azOr{jXt@gJ#:px-Wx)Da4MFYU}=\^ -/.^Ȼ6>*/ɭ#߇uùoR\߸}Nsk ~T$Y )6@V:u$N_|ә&vAO]4^#@$ķ ~f6a%)hWek2nwֿiC_ờfq㷮lӳ|TV23 !>,ʤ,1uJcznDM 4 yluzpI-2M8a06ǖ$2 D#S}O0m/ xH:jU*XYȦ+ a 5IKXlC0$!4!$X1~9EΩO=Ɛj$IP~${n\[{TN\>%Sm;նS}S1Tog-\`TV$ %S TA"cL$ZK$wc$ya&q[(0ZyTaqlel 12 `2m PSpf-Q}wq r1vUlqݾ^I`Teȋ9 AqԀѕPf8+ϜTu5i;δ+'NDnͭor0_WLAMI25{~Dw0`pjXa.gnP^7,"ur+̥Y>.i= L\sg_/~R4l2| vZ=_W 3iN q$%&@rT= L$̥Dqh#%8Թ>'`9kt^6ȅP-yWdŞĂU T̒DejŵA&NY^G%Ð>do(ȃ/:G'(X~~T"[BH(kknVE嗳{j)~qvgTjLr2Y@NdYI&mEQ I1-@ht7f>wyjjA<.X-;i0v|~(-1_'7yU>czT0iĈ#"b\,"-U Q:V DbI]j($,bHi"ec *UEĽϣy}YFYoMnჭj (D KȘENr,uZvcOf\ BPbY|):W[BK\kg yaLܸKtkO^9jd7jQR<'nm& (tY} σw?eug.ZU:[<(!/$:[|ζ/O:7~/8!Pa n^pd(@k?>x\ŹRX!VFiekc,UZAk^Ԣ,oLXPNoYµԭe}``'%MyS 5>&P%:&G&O]ɰ< ^ `K>__]}}wWxU"KquoZDg&$W^E?܌&2!t~L:$JSHu qi[zyUUÊ=7 ƳQtq~^[Nh㒼kMyE`al(hY2ɗZUj WϤaAv8zFpc?I Yў1V򎂤=V嚶r/YJ =b*yW fa7޴ZɢaoLOmڂc~'Z('Z'Z: 3\S >qVsY4'EX< <؏?83ew wf>ǩHXr= [07Iy,}g57oN%dh5aEnӝ{O!ȕfUHuddŲ\('dDgҵ-41_n>!гOr1D 9*s͜N<֘Kf.X ߡK]HP!w!a5QF7ɒS0FNY  BqqB: ,}ww=iYKN ~*ͺvZ!\YnYk$ՇQe{:Qltpݻ~XrΏ~^g1^?J /Zmk:[ELI.ޢ/k7p~[]N1hzivMhvkBB^dJʔVO<>S8;a:Mo}kUL;s?_r)9h}jAR^y1< +RȀ?| "h4*Mg&f㜯wXͤ:^$gO~8 c~]ߵk/^|xDK%S^*Z! 
K,Sg'jHΧ?yj>f@]p6y@I+L^l4v?Jjr!x%ڑaҙfB\>t1'cdƏ\۬cR !{$6O" O"} 5< EH)0ZSN\["` nvDl5xD;m]|,Qh==5THpsy{K$dJRGm*֬pm)>\?C1-\ L+oȆբy_]߹lSCсMo磛]rw4$p bfE!M}HAZf֢p5A$k })bD؁ X;^ @/L~k9j{X(J׎g 0ЄUGqRaC`0E{X XZ deӌ(҂ b[snSFB'o%7!XKE DHJ"AC½$,ńDŽhDD5ތHg21~V1xa*S^ՂZ87az=9\{e2~g3UyOVwjVo$C28~\]}~FHFbTy73} PЏ{Kd?d/v}G 1ŜP'r _-Ӯ. 2DK g%?#,$$EN5? 1 &,;LP䶌*$Pٻ frG PYT8vb:Lӆ# 2"<,=3yhzy.kpmi1X2^/2}??c0ekg֛"ѧ:;tҭʔ(r`Je*kBLʺϫE @8 ^ājB8Ӹ;VTlVHy9%T Qw!&jT8'bYMh0|wj1r2ؓZӻGL_B q qIE{˨F<ٱ4ˎ46RWK3UVFC3u0`ӚmoSFP"Cr v/8M,ts cW7_PEb&وDG]ЉXY7Vp?l)El'}oG?~Zco4WBvAo!=#락9\-x?} ]:|s{WF ZD~+k8笔A9L/;yc*2΂ JJդ-OflTA>{{ϳq4b?}NGLLN<ۥ*m}X~ϱPv.ME JaL i.\'~Ԑ tqqjIEtQ EԨ.,qa_ קէ,PepB ԧUPbקKΈ3X!MJT!cZrAԧקSO8&ӽ>=>(#O+Q# z}s %UeT!Shq} ?S1!i(aYc*Dh`A:T)&D/YJ ^{)Rx]bØ1r8̥$1Jh_-rQMYD8eVRG@g2u@%p~c7*% r\tUX Úakd !:&< D1|Si[fDR)E=דuCbiSHKX81TĜQiR,SiE\8C(Qubψjߜ"%rwoBL DX;0^r&@cpZZ};pٿCq_2ַS ߦ/cJ1v;UcRx!îOU Ȟߙ*%1ܩH L0GSX(]D's{ߎ`{&ϲfnp>,.Ͼb18?Di %:uf^tDV֗G5f.}rA)'_.k1P]l^L>#o9􇬮݊5,˞frE‘3I1 &Fڽre盗XYД0豓ޮ!:E ̛ ( Zƒ;J%&,u'C ˹jNڞrcuݷo6 Rs˒L%aG")0e@l]v!~<_tH;7%J }H_j2@0m{`M:._iQ֥RIռwf;hf-LJV#*:,LZǿ$oV6>KV"㍊RN*A. V;$;0TE0T`ZKX)LD*Bt:dv+Cx lJ1[zk~s3+klͲ9U4kǟvy +jX.5Jfѣ:z^ow!o-7CQnW [´7l ؒfɟ`v­F1<䅵ʁ/5ŇP)|A`7UVK*;Λրi%3zo W#OAMnУH@]@'%W-W꽖N齖WH-g?"\^˦ׂ30Jh9Z!Z$Qb-҄@Z2dOXoW[iQh یՃȁ(bI^[8۴8lZg_xL^pLg2Z WjXس.Q0JO+[TpN/2oS BZXAN LU Bk$,1bXL 㜳F2VY5MtB5L4Z =JmΆ.grF|$C"P@9IqMtBH'.T G YE -!I8+HH&q(8ӂ2|%uEgZL1޹ܚ:hW첕}  !=cv:@a;8+ae@Ѣ$~=:4Jd0ȮjuGtSDG2R{Ԫ<^cVF,#+C(0EǐTCW:FDfy;9.x`9ј9޻`5Lt`L*P{SB+5囄Ro"A%Kh11x,%KQRb)Q][o#7+_8x3$N ݗXr&b[ݒZ6[nɲ6LFIV}bYh5LhL5 ?\{Q:&WMtRQwjԛLpMڕ2P_uI@e*⇑8 Lu.eL]Ɨ+gHfUzr8di!xNIY@ N!(ɹqeFA"D>@#`}dvoP-wWCb~2?cvMofJMܼ7;+luVwoaVbz#ػg z>&V)2X14 Snc[%wi?2]4}y=E+dl+.{$wi;IOI_~ Kʴ荒 ~m{ K4{ws.[UlOw ^FVRxLKZ' UVQE\XtE:`i24c {;(? \8f- Y J/Z40@'JcʲC&p!_n!-?cHS+x I)l̵4m+d23śR=5YM 4ZFP2\VHM#hmňCp&\J(*_g(% +W9k˒J'#K$ d[R: VU%^ZWr/*&OR\D-K5eRRh D:w` ѩHY:8KhETy4{IL~hUKQ]9ʬ|i)1neY5JQ8=On}J'cW0r~v'?4 kC,B< ӌXFri &q;X唤{AZ=d-_w?pc4 ?> 4ns>.V?s}=;YDS!NFd:G1c)p 3cb8*OKz2s:uU#S՛r9#NJ¥b\?__cSI~(Sc%5\)v2b6mlSsFZ! ]sʊiɸ78653dsEWӋ?]  
var/home/core/zuul-output/logs/kubelet.log
Mar 20 00:09:05 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 20 00:09:06 crc kubenswrapper[5106]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 00:09:06 crc kubenswrapper[5106]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 20 00:09:06 crc kubenswrapper[5106]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 00:09:06 crc kubenswrapper[5106]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 00:09:06 crc kubenswrapper[5106]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 20 00:09:06 crc kubenswrapper[5106]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.481032 5106 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489784 5106 feature_gate.go:328] unrecognized feature gate: PinnedImages
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489811 5106 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489837 5106 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489842 5106 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489846 5106 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489850 5106 feature_gate.go:328] unrecognized feature gate: DualReplica
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489854 5106 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489858 5106 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489862 5106 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489866 5106 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489870 5106 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489887 5106 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489912 5106 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489917 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489922 5106 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489926 5106 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489933 5106 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489938 5106 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489942 5106 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489947 5106 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489951 5106 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489955 5106 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489958 5106 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489962 5106 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489966 5106 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489971 5106 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489993 5106 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.489998 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490003 5106 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490008 5106 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490012 5106 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490016 5106 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490020 5106 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490027 5106 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490032 5106 feature_gate.go:328] unrecognized feature gate: SignatureStores
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490036 5106 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490041 5106 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490045 5106 feature_gate.go:328] unrecognized feature gate: NewOLM
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490048 5106 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490070 5106 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490075 5106 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490080 5106 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490084 5106 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490088 5106 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490095 5106 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490099 5106 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490103 5106 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490107 5106 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490112 5106 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490116 5106 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490121 5106 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490126 5106 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490131 5106 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490153 5106 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490158 5106 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490164 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490168 5106 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490173 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490178 5106 feature_gate.go:328] unrecognized feature gate: OVNObservability
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490183 5106 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490187 5106 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490191 5106 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490195 5106 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490201 5106 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490205 5106 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490209 5106 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490231 5106 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490236 5106 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490240 5106 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490244 5106 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490248 5106 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490252 5106 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490259 5106 feature_gate.go:328] unrecognized feature gate: Example
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490263 5106 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490267 5106 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490271 5106 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490277 5106 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490282 5106 feature_gate.go:328] unrecognized feature gate: Example2
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490286 5106 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490291 5106 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490335 5106 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490340 5106 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490344 5106 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490348 5106 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490352 5106 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.490357 5106 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491016 5106 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491039 5106 feature_gate.go:328] unrecognized feature gate: NewOLM
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491043 5106 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491046 5106 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491051 5106 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491054 5106 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491057 5106 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491060 5106 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491064 5106 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491067 5106 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491070 5106 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491075 5106 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491080 5106 feature_gate.go:328] unrecognized feature gate: SignatureStores
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491083 5106 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491087 5106 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491091 5106 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491094 5106 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491112 5106 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491117 5106 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491122 5106 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491126 5106 feature_gate.go:328] unrecognized feature gate: PinnedImages
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491130 5106 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491136 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491140 5106 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491144 5106 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491147 5106 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491150 5106 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491154 5106 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491157 5106 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491160 5106 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491164 5106 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491167 5106 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491170 5106 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491173 5106 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491190 5106 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491194 5106 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491197 5106 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491201 5106 feature_gate.go:328] unrecognized feature gate: OVNObservability
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491205 5106 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491209 5106 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491212 5106 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491215 5106 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491218 5106 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491222 5106 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491225 5106 feature_gate.go:328] unrecognized feature gate: DualReplica
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491229 5106 feature_gate.go:328] unrecognized feature gate: Example
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491232 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491236 5106 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491239 5106 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491242 5106 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491245 5106 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491248 5106 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491266 5106 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491275 5106 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491279 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491282 5106 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491286 5106 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491289 5106 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491292 5106 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491295 5106 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491298 5106 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491301 5106 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491305 5106 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491308 5106 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491311 5106 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491315 5106 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491318 5106 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491321 5106 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491324 5106 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491328 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491331 5106 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491334 5106 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491337 5106 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491341 5106 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491344 5106 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491347 5106 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491364 5106 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491367 5106 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491371 5106 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491375 5106 feature_gate.go:328] unrecognized feature gate: Example2
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491378 5106 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491381 5106 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491384 5106 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491388 5106 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491391 5106 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.491396 5106 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491513 5106 flags.go:64] FLAG: --address="0.0.0.0"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491537 5106 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491548 5106 flags.go:64] FLAG: --anonymous-auth="true"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491554 5106 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491560 5106 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491564 5106 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491569 5106 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491586 5106 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491590 5106 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491595 5106 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491599 5106 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491603 5106 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491607 5106 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491610 5106 flags.go:64] FLAG: --cgroup-root=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491614 5106 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491617 5106 flags.go:64] FLAG: --client-ca-file=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491621 5106 flags.go:64] FLAG: --cloud-config=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491625 5106 flags.go:64] FLAG: --cloud-provider=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491628 5106 flags.go:64] FLAG: --cluster-dns="[]"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491633 5106 flags.go:64] FLAG: --cluster-domain=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491636 5106 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491640 5106 flags.go:64] FLAG: --config-dir=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491644 5106 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491648 5106 flags.go:64] FLAG: --container-log-max-files="5"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491653 5106 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491658 5106 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491662 5106 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491667 5106 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491671 5106 flags.go:64] FLAG: --contention-profiling="false"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491674 5106 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491678 5106 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491684 5106 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491687 5106 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491693 5106 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491697 5106 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491700 5106 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491704 5106 flags.go:64] FLAG: --enable-load-reader="false"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491724 5106 flags.go:64] FLAG: --enable-server="true"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491728 5106 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491734 5106 flags.go:64] FLAG: --event-burst="100"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491738 5106 flags.go:64] FLAG: --event-qps="50"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491741 5106 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491747 5106 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491751 5106 flags.go:64] FLAG: --eviction-hard=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491756 5106 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491760 5106 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491764 5106 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491767 5106 flags.go:64] FLAG: --eviction-soft=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491771 5106 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491775 5106 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491779 5106 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491782 5106 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491786 5106 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491789 5106 flags.go:64] FLAG: --fail-swap-on="true"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491793 5106 flags.go:64] FLAG: --feature-gates=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491798 5106 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491802 5106 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491805 5106 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491809 5106 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491814 5106 flags.go:64] FLAG: --healthz-port="10248"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491817 5106 flags.go:64] FLAG: --help="false"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491821 5106 flags.go:64] FLAG: --hostname-override=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491824 5106 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491832 5106 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491836 5106 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491840 5106 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491843 5106 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491847 5106 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491850 5106 flags.go:64] FLAG: --image-service-endpoint=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491854 5106 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491858 5106 flags.go:64] FLAG: --kube-api-burst="100"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491861 5106 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491865 5106 flags.go:64] FLAG: --kube-api-qps="50"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491869 5106 flags.go:64] FLAG: --kube-reserved=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491878 5106 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491882 5106 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491887 5106 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491891 5106 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491896 5106 flags.go:64] FLAG: --lock-file=""
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491900 5106 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491905 5106
flags.go:64] FLAG: --log-flush-frequency="5s" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491909 5106 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491915 5106 flags.go:64] FLAG: --log-json-split-stream="false" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491919 5106 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491923 5106 flags.go:64] FLAG: --log-text-split-stream="false" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491927 5106 flags.go:64] FLAG: --logging-format="text" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491930 5106 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491934 5106 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491938 5106 flags.go:64] FLAG: --manifest-url="" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491941 5106 flags.go:64] FLAG: --manifest-url-header="" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491947 5106 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491961 5106 flags.go:64] FLAG: --max-open-files="1000000" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491968 5106 flags.go:64] FLAG: --max-pods="110" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491973 5106 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491977 5106 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491984 5106 flags.go:64] FLAG: --memory-manager-policy="None" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491989 5106 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 
00:09:06.491994 5106 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.491999 5106 flags.go:64] FLAG: --node-ip="192.168.126.11" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492004 5106 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492017 5106 flags.go:64] FLAG: --node-status-max-images="50" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492023 5106 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492028 5106 flags.go:64] FLAG: --oom-score-adj="-999" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492033 5106 flags.go:64] FLAG: --pod-cidr="" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492038 5106 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492047 5106 flags.go:64] FLAG: --pod-manifest-path="" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492053 5106 flags.go:64] FLAG: --pod-max-pids="-1" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492059 5106 flags.go:64] FLAG: --pods-per-core="0" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492064 5106 flags.go:64] FLAG: --port="10250" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492069 5106 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492074 5106 flags.go:64] FLAG: --provider-id="" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492079 5106 flags.go:64] FLAG: --qos-reserved="" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492104 5106 flags.go:64] FLAG: --read-only-port="10255" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 
00:09:06.492109 5106 flags.go:64] FLAG: --register-node="true" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492113 5106 flags.go:64] FLAG: --register-schedulable="true" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492117 5106 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492126 5106 flags.go:64] FLAG: --registry-burst="10" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492130 5106 flags.go:64] FLAG: --registry-qps="5" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492135 5106 flags.go:64] FLAG: --reserved-cpus="" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492139 5106 flags.go:64] FLAG: --reserved-memory="" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492144 5106 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492148 5106 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492153 5106 flags.go:64] FLAG: --rotate-certificates="false" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492157 5106 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492162 5106 flags.go:64] FLAG: --runonce="false" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492167 5106 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492171 5106 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492176 5106 flags.go:64] FLAG: --seccomp-default="false" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492180 5106 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492184 5106 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 20 00:09:06 crc kubenswrapper[5106]: 
I0320 00:09:06.492189 5106 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492194 5106 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492198 5106 flags.go:64] FLAG: --storage-driver-password="root" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492202 5106 flags.go:64] FLAG: --storage-driver-secure="false" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492206 5106 flags.go:64] FLAG: --storage-driver-table="stats" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492209 5106 flags.go:64] FLAG: --storage-driver-user="root" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492213 5106 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492217 5106 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492221 5106 flags.go:64] FLAG: --system-cgroups="" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492225 5106 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492231 5106 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492235 5106 flags.go:64] FLAG: --tls-cert-file="" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492238 5106 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492243 5106 flags.go:64] FLAG: --tls-min-version="" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492247 5106 flags.go:64] FLAG: --tls-private-key-file="" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492251 5106 flags.go:64] FLAG: --topology-manager-policy="none" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492254 5106 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 20 00:09:06 crc 
kubenswrapper[5106]: I0320 00:09:06.492258 5106 flags.go:64] FLAG: --topology-manager-scope="container" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492262 5106 flags.go:64] FLAG: --v="2" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492268 5106 flags.go:64] FLAG: --version="false" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492273 5106 flags.go:64] FLAG: --vmodule="" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492279 5106 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.492283 5106 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492380 5106 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492385 5106 feature_gate.go:328] unrecognized feature gate: PinnedImages Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492388 5106 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492392 5106 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492396 5106 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492401 5106 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492405 5106 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492409 5106 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492413 5106 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492417 5106 feature_gate.go:328] 
unrecognized feature gate: ManagedBootImagesAzure Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492422 5106 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492427 5106 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492431 5106 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492435 5106 feature_gate.go:328] unrecognized feature gate: OVNObservability Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492439 5106 feature_gate.go:328] unrecognized feature gate: GatewayAPI Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492443 5106 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492447 5106 feature_gate.go:328] unrecognized feature gate: InsightsConfig Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492451 5106 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492455 5106 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492459 5106 feature_gate.go:328] unrecognized feature gate: SignatureStores Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492463 5106 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492467 5106 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492470 5106 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492473 5106 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Mar 20 00:09:06 crc kubenswrapper[5106]: 
W0320 00:09:06.492477 5106 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492480 5106 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492484 5106 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492487 5106 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492491 5106 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492494 5106 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492497 5106 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492503 5106 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492508 5106 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492512 5106 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492515 5106 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492519 5106 feature_gate.go:328] unrecognized feature gate: Example
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492522 5106 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492527 5106 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492530 5106 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492534 5106 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492537 5106 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492541 5106 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492544 5106 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492547 5106 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492550 5106 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492555 5106 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492558 5106 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492561 5106 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492564 5106 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492568 5106 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492572 5106 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492589 5106 feature_gate.go:328] unrecognized feature gate: Example2
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492592 5106 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492596 5106 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492599 5106 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492604 5106 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492608 5106 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492612 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492616 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492619 5106 feature_gate.go:328] unrecognized feature gate: NewOLM
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492623 5106 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492627 5106 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492630 5106 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492633 5106 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492636 5106 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492640 5106 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492643 5106 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492646 5106 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492649 5106 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492653 5106 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492657 5106 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492665 5106 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492668 5106 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492671 5106 feature_gate.go:328] unrecognized feature gate: DualReplica
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492675 5106 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492678 5106 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492681 5106 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492684 5106 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492687 5106 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492691 5106 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492694 5106 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492699 5106 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492702 5106 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492705 5106 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492708 5106 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.492711 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.493479 5106 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.505930 5106 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.505969 5106 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506027 5106 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506035 5106 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506039 5106 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506044 5106 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506048 5106 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506051 5106 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506054 5106 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506057 5106 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506061 5106 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506064 5106 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506068 5106 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506071 5106 feature_gate.go:328] unrecognized feature gate: DualReplica
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506075 5106 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506079 5106 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506084 5106 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506088 5106 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506091 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506094 5106 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506097 5106 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506101 5106 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506104 5106 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506107 5106 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506111 5106 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506114 5106 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506117 5106 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506121 5106 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506124 5106 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506127 5106 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506130 5106 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506135 5106 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506139 5106 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506145 5106 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506149 5106 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506154 5106 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506159 5106 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506164 5106 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506167 5106 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506170 5106 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506174 5106 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506177 5106 feature_gate.go:328] unrecognized feature gate: PinnedImages
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506180 5106 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506184 5106 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506187 5106 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506190 5106 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506194 5106 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506198 5106 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506201 5106 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506204 5106 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506208 5106 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506211 5106 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506214 5106 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506218 5106 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506222 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506226 5106 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506229 5106 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506232 5106 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506236 5106 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506239 5106 feature_gate.go:328] unrecognized feature gate: SignatureStores
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506242 5106 feature_gate.go:328] unrecognized feature gate: Example2
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506245 5106 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506248 5106 feature_gate.go:328] unrecognized feature gate: Example
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506252 5106 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506255 5106 feature_gate.go:328] unrecognized feature gate: NewOLM
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506258 5106 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506263 5106 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506267 5106 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506271 5106 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506275 5106 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506279 5106 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506283 5106 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506312 5106 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506317
5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506321 5106 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506325 5106 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506329 5106 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506333 5106 feature_gate.go:328] unrecognized feature gate: OVNObservability Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506338 5106 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506342 5106 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506346 5106 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506349 5106 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506355 5106 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506362 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506367 5106 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506371 5106 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506376 5106 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506381 5106 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.506388 5106 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506509 5106 feature_gate.go:328] unrecognized feature gate: DualReplica
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506519 5106 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506524 5106 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506529 5106 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506534 5106 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506539 5106 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506543 5106 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506547 5106 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506551 5106 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506555 5106 feature_gate.go:328] unrecognized feature gate: SignatureStores
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506597 5106 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506602 5106 feature_gate.go:328] unrecognized feature gate: OVNObservability
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506606 5106 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506610 5106 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506613 5106 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506618 5106 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506623 5106 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506628 5106 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506632 5106 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506636 5106 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506639 5106 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506643 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506647 5106 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506650 5106 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506653 5106 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506656 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506660 5106 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506663 5106 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506666 5106 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506670 5106 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506673 5106 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506678 5106 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506681 5106 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506685 5106 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506689 5106 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506694 5106 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506698 5106 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506701 5106 feature_gate.go:328] unrecognized feature gate: Example2
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506705 5106 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506708 5106 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506711 5106 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506715 5106 feature_gate.go:328] unrecognized feature gate: NewOLM
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506718 5106 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506722 5106 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506726 5106 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506729 5106 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506732 5106 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506735 5106 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506738 5106 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506742 5106 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506745 5106 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506748 5106 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506752 5106 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506756 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506760 5106 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506763 5106 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506766 5106 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506770 5106 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506773 5106 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506777 5106 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506781 5106 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506786 5106 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506791 5106 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506795 5106 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506800 5106 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506804 5106 feature_gate.go:328] unrecognized feature gate: PinnedImages
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506808 5106 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506811 5106 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506815 5106 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506820 5106 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506823 5106 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506827 5106 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506830 5106 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506834 5106 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506837 5106 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506840 5106 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506844 5106 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506848 5106 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506852 5106 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506855 5106 feature_gate.go:328] unrecognized feature gate: Example
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506858 5106 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506861 5106 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506864 5106 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506868 5106 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506871 5106 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Mar 20 00:09:06 crc kubenswrapper[5106]: W0320 00:09:06.506874 5106 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.506880 5106 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.509984 5106 server.go:962] "Client rotation is on, will bootstrap in background"
Mar 20 00:09:06 crc kubenswrapper[5106]: E0320 00:09:06.513722 5106 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.517217 5106 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.517330 5106 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.518627 5106 server.go:1019] "Starting client certificate rotation"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.518788 5106 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.520092 5106 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.617346 5106 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.634177 5106 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 20 00:09:06 crc kubenswrapper[5106]: E0320 00:09:06.640316 5106 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.665474 5106 log.go:25] "Validated CRI v1 runtime API"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.788814 5106 log.go:25] "Validated CRI v1 image API"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.791795 5106 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.795997 5106 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-03-20-00-02-33-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2]
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.796076 5106 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.838775 5106 manager.go:217] Machine: {Timestamp:2026-03-20 00:09:06.825498531 +0000 UTC m=+1.259232665 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:fdcdcd70-d7d0-45f6-8fe8-c45ef984f286 BootID:a9af530b-46e3-4432-bc61-2c5eccf70cd7 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ad:c9:e3 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ad:c9:e3 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:05:8c:8f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:84:68:cd Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ea:ae:8d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c5:84:53 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fe:3c:af:f8:85:d7 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:b6:10:9a:44:d6:9d Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.839684 5106 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.840117 5106 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.842249 5106 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.842336 5106 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.842774 5106 topology_manager.go:138] "Creating topology manager with none policy"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.842801 5106 container_manager_linux.go:306] "Creating device plugin manager"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.842887 5106 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.843994 5106 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.845224 5106 state_mem.go:36] "Initialized new in-memory state store"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.846102 5106 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.849022 5106 kubelet.go:491] "Attempting to sync node with API server"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.849076 5106 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.849105 5106 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.849130 5106 kubelet.go:397] "Adding apiserver pod source"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.849178 5106 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 20 00:09:06 crc kubenswrapper[5106]: E0320 00:09:06.862375 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 20 00:09:06 crc kubenswrapper[5106]: E0320 00:09:06.862421 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.863126 5106 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.863166 5106 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.874267 5106 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.874291 5106 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.882036 5106 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.882277 5106 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.882862 5106 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.885421 5106 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.885446 5106 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.885453 5106 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.885461 5106 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.885468 5106 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.885475 5106 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.885483 5106 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.885491 5106 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.885500 5106 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.885510 5106 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.885521 5106 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.890210 5106 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.891440 5106 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.891458 5106 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.892902 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.935927 5106 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.936032 5106 server.go:1295] "Started kubelet"
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.936373 5106 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.936373 5106 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.936686 5106 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.937287 5106 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 20 00:09:06 crc systemd[1]: Started Kubernetes Kubelet.
Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.939864 5106 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.940264 5106 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Mar 20 00:09:06 crc kubenswrapper[5106]: E0320 00:09:06.940816 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="200ms" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.941000 5106 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 20 00:09:06 crc kubenswrapper[5106]: E0320 00:09:06.939842 5106 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189e64158bf8a8a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:06.935974056 +0000 UTC m=+1.369708110,LastTimestamp:2026-03-20 00:09:06.935974056 +0000 UTC m=+1.369708110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:06 crc kubenswrapper[5106]: E0320 00:09:06.941039 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.941271 5106 volume_manager.go:295] "The desired_state_of_world populator starts" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.941294 5106 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 20 00:09:06 crc kubenswrapper[5106]: E0320 00:09:06.941438 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.942094 5106 server.go:317] "Adding debug handlers to kubelet server" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.943134 5106 factory.go:55] Registering systemd factory Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.943163 5106 factory.go:223] Registration of the systemd container factory successfully Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.943717 5106 factory.go:153] Registering CRI-O factory Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.943746 5106 factory.go:223] Registration of the crio container factory successfully Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.943892 5106 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.943992 5106 factory.go:103] Registering Raw factory Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.944031 5106 manager.go:1196] Started watching for new ooms in manager Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.958099 5106 manager.go:319] Starting recovery of all containers Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.972694 5106 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 20 00:09:06 crc kubenswrapper[5106]: I0320 00:09:06.992509 5106 manager.go:324] Recovery completed Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.008370 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.009562 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.009645 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.009660 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.010546 5106 cpu_manager.go:222] "Starting CPU manager" policy="none" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.010561 5106 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.010594 5106 state_mem.go:36] "Initialized new in-memory state store" Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.042164 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.142386 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.144400 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="400ms" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.159594 5106 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.159668 5106 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.159707 5106 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.159735 5106 kubelet.go:2451] "Starting kubelet main sync loop" Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.159795 5106 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.160821 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.169449 5106 policy_none.go:49] "None policy: Start" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.169661 5106 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.169740 5106 state_mem.go:35] "Initializing new in-memory state store" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204551 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204619 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" 
volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204631 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204643 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204653 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204664 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204674 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204684 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204695 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204707 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204717 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204727 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204736 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204746 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204770 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204779 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204792 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204810 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204820 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204830 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204840 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204850 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204860 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204869 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204879 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204890 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204901 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204910 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204923 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204934 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204944 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204953 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204966 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204977 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204986 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.204996 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205006 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205016 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205026 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205036 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205047 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205060 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205092 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205103 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205115 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205125 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205136 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205150 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205160 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205171 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205181 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205191 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205200 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205210 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205220 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205231 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205249 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205265 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205275 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205286 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205296 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205307 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205321 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205334 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205346 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205356 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205365 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205377 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205387 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205398 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205408 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205418 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205428 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205439 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205449 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205459 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext=""
Mar 20
00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205470 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205480 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205490 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205500 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205511 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205543 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205553 
5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205564 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205590 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205601 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205611 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205621 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205631 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205642 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205653 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205663 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205673 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205683 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205693 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" 
volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205703 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205714 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205724 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205735 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205746 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205758 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" 
volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205774 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205785 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205795 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205806 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205816 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205826 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" 
seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205836 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205846 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.205857 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.206160 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.210822 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.211300 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212257 5106 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212296 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212329 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212362 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212390 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212422 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212450 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212479 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212508 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212536 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212563 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212636 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212671 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" 
seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212700 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212743 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212772 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212801 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212827 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212855 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212881 5106 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212908 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212935 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212963 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.212990 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.213017 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.213045 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.213073 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.213100 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.213130 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.213157 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.213220 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.213247 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" 
volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.213276 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.213302 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.213329 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.213357 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.215665 5106 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.215726 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.215753 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.215780 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.215804 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.215825 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.215846 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.215868 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" 
volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.215894 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.215916 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.215937 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.215958 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.215980 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216004 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" 
seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216027 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216072 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216093 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216113 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216134 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216156 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216177 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216197 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216218 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216239 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216260 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216282 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216305 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216326 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216346 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216365 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216385 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216406 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216427 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216447 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216468 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216491 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216514 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216535 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216555 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216607 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216640 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216662 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216682 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216702 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216724 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216744 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216765 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216784 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216805 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216826 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216846 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216869 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216888 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216908 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216929 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216950 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216973 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.216994 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217014 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217036 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217057 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217078 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217098 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217122 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217143 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217164 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217185 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217210 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217234 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217254 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217275 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217296 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217316 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217337 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217360 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217381 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217449 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217475 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217501 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217612 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217645 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217667 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217687 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217711 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217733 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217754 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217774 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217795 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217815 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217837 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217857 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217878 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217899 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217919 5106 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217938 5106 reconstruct.go:97] "Volume reconstruction finished"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.217950 5106 reconciler.go:26] "Reconciler: start to sync state"
Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.243164 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.260157 5106 kubelet.go:2475] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.283537 5106 manager.go:341] "Starting Device Plugin manager"
Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.283919 5106 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.283948 5106 server.go:85] "Starting device plugin registration server"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.284534 5106 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.284563 5106 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.284838 5106 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.284933 5106 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.284947 5106 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.301241 5106 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.301317 5106 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.386244 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.388966 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.389044 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.389075 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.389126 5106 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.389874 5106 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.461008 5106 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"]
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.461362 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.462753 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.462813 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.462826 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.463651 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.463963 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.464040 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.465045 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.465110 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.465125 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.465138 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.465144 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.465319 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.466498 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.467411 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.467473 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.467775 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.467841 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.467869 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.469163 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.469374 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.469487 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.470131 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.470181 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.470203 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.471481 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.471523 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.471542 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.472638 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.473034 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.473088 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.473392 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.473430 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.473450 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.474617 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.474671 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.474695 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.475019 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.475077 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.475097 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.476327 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.476909 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.477848 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.477890 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.477902 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.493348 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.505065 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.546096 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="800ms"
Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.550447 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.563289 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.571752 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.590764 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.592201 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.592276 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.592297 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.592342 5106 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.593195 5106 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.622561 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.622723 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " 
pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.622852 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.622907 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.623354 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.623423 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.623475 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.623527 5106 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.623613 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.623672 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.623726 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.623773 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.623946 5106 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.624036 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.624076 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.624162 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.624264 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.624334 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.624396 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.624409 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.624420 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.624521 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.624555 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.624649 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.624878 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.624967 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.625185 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.626784 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.627254 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.631234 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: E0320 00:09:07.686150 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.725763 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.725853 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.725895 5106 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.725998 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726051 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726112 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726191 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726210 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726251 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726207 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726339 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726368 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726394 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726416 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726422 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726438 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726465 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726482 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726490 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726513 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726514 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726543 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726553 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726603 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726627 5106 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726638 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726647 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726673 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726689 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726703 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" 
(UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726698 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.726724 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.796142 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.806396 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.852836 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.864309 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.876570 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:07 crc kubenswrapper[5106]: W0320 00:09:07.879624 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-2bcd3158050675d56b6bea32f7ee75976a348eecdf71adca5f62238cd4d8ddb2 WatchSource:0}: Error finding container 2bcd3158050675d56b6bea32f7ee75976a348eecdf71adca5f62238cd4d8ddb2: Status 404 returned error can't find the container with id 2bcd3158050675d56b6bea32f7ee75976a348eecdf71adca5f62238cd4d8ddb2 Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.890165 5106 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 00:09:07 crc kubenswrapper[5106]: W0320 00:09:07.890498 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-2ab308f26ee7237218e1c0a04846a3db8a6d75386a30a81e7543086c9ed605ff WatchSource:0}: Error finding container 2ab308f26ee7237218e1c0a04846a3db8a6d75386a30a81e7543086c9ed605ff: Status 404 returned error can't find the container with id 2ab308f26ee7237218e1c0a04846a3db8a6d75386a30a81e7543086c9ed605ff Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.894896 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Mar 20 00:09:07 crc kubenswrapper[5106]: W0320 00:09:07.926191 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-9f8dada33230f3277ca65faf622f9c3b9a0ef9df6dd5728506b1eeb8032d440f WatchSource:0}: Error finding container 
9f8dada33230f3277ca65faf622f9c3b9a0ef9df6dd5728506b1eeb8032d440f: Status 404 returned error can't find the container with id 9f8dada33230f3277ca65faf622f9c3b9a0ef9df6dd5728506b1eeb8032d440f
Mar 20 00:09:07 crc kubenswrapper[5106]: W0320 00:09:07.928843 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-7674387b9d40f82dbb946ec3baf74ec9b7e8f73f9251a5989247e4f86eb5f747 WatchSource:0}: Error finding container 7674387b9d40f82dbb946ec3baf74ec9b7e8f73f9251a5989247e4f86eb5f747: Status 404 returned error can't find the container with id 7674387b9d40f82dbb946ec3baf74ec9b7e8f73f9251a5989247e4f86eb5f747
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.993988 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.999683 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.999753 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.999777 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:07 crc kubenswrapper[5106]: I0320 00:09:07.999826 5106 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Mar 20 00:09:08 crc kubenswrapper[5106]: E0320 00:09:08.000811 5106 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc"
Mar 20 00:09:08 crc kubenswrapper[5106]: E0320 00:09:08.037177 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 20 00:09:08 crc kubenswrapper[5106]: I0320 00:09:08.169389 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7674387b9d40f82dbb946ec3baf74ec9b7e8f73f9251a5989247e4f86eb5f747"}
Mar 20 00:09:08 crc kubenswrapper[5106]: I0320 00:09:08.170511 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"9f8dada33230f3277ca65faf622f9c3b9a0ef9df6dd5728506b1eeb8032d440f"}
Mar 20 00:09:08 crc kubenswrapper[5106]: I0320 00:09:08.171613 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"d1a4e3e5219f9f7baefde5e8f52db73686504565e8387c80c148b8d32ec8d757"}
Mar 20 00:09:08 crc kubenswrapper[5106]: I0320 00:09:08.172860 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"2ab308f26ee7237218e1c0a04846a3db8a6d75386a30a81e7543086c9ed605ff"}
Mar 20 00:09:08 crc kubenswrapper[5106]: I0320 00:09:08.173846 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"2bcd3158050675d56b6bea32f7ee75976a348eecdf71adca5f62238cd4d8ddb2"}
Mar 20 00:09:08 crc kubenswrapper[5106]: E0320 00:09:08.347329 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="1.6s"
Mar 20 00:09:08 crc kubenswrapper[5106]: E0320 00:09:08.416355 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 20 00:09:08 crc kubenswrapper[5106]: E0320 00:09:08.636873 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 20 00:09:08 crc kubenswrapper[5106]: I0320 00:09:08.801344 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:08 crc kubenswrapper[5106]: I0320 00:09:08.803786 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:08 crc kubenswrapper[5106]: I0320 00:09:08.803844 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:08 crc kubenswrapper[5106]: I0320 00:09:08.803856 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:08 crc kubenswrapper[5106]: I0320 00:09:08.803889 5106 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Mar 20 00:09:08 crc kubenswrapper[5106]: E0320 00:09:08.804675 5106 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc"
Mar 20 00:09:08 crc kubenswrapper[5106]: I0320 00:09:08.823309 5106 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Mar 20 00:09:08 crc kubenswrapper[5106]: E0320 00:09:08.824847 5106 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 20 00:09:08 crc kubenswrapper[5106]: I0320 00:09:08.908973 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Mar 20 00:09:09 crc kubenswrapper[5106]: E0320 00:09:09.752911 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 20 00:09:09 crc kubenswrapper[5106]: E0320 00:09:09.881186 5106 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189e64158bf8a8a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:06.935974056 +0000 UTC m=+1.369708110,LastTimestamp:2026-03-20 00:09:06.935974056 +0000 UTC m=+1.369708110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 00:09:09 crc kubenswrapper[5106]: I0320 00:09:09.894030 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Mar 20 00:09:09 crc kubenswrapper[5106]: E0320 00:09:09.948965 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="3.2s"
Mar 20 00:09:10 crc kubenswrapper[5106]: E0320 00:09:10.172083 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.179760 5106 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620" exitCode=0
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.179825 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620"}
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.179938 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.180620 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.180670 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.180685 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:10 crc kubenswrapper[5106]: E0320 00:09:10.180947 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.182058 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ac75ee6fde37618d4f3addeaf11a5035d2a16b4a85f84bfb1ce62033bdfaaad7"}
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.182105 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd"}
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.184210 5106 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7" exitCode=0
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.184340 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.184448 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.184683 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7"}
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.185148 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.185376 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.185393 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.185425 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.185481 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.185500 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:10 crc kubenswrapper[5106]: E0320 00:09:10.189291 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.190537 5106 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042" exitCode=0
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.190601 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042"}
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.190752 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.191270 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.191290 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.191302 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:10 crc kubenswrapper[5106]: E0320 00:09:10.191444 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.194140 5106 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7" exitCode=0
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.194231 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7"}
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.194437 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.195323 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.195368 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.195380 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:10 crc kubenswrapper[5106]: E0320 00:09:10.195684 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.405506 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.406851 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.406891 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.406903 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.406943 5106 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Mar 20 00:09:10 crc kubenswrapper[5106]: E0320 00:09:10.407373 5106 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc"
Mar 20 00:09:10 crc kubenswrapper[5106]: I0320 00:09:10.894537 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Mar 20 00:09:11 crc kubenswrapper[5106]: E0320 00:09:11.135492 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 20 00:09:11 crc kubenswrapper[5106]: I0320 00:09:11.198178 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"a52ed260d73e5a0b44832957540e259da3d4ef397908acd15ad2b9a53eac5878"}
Mar 20 00:09:11 crc kubenswrapper[5106]: I0320 00:09:11.198312 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:11 crc kubenswrapper[5106]: I0320 00:09:11.198949 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:11 crc kubenswrapper[5106]: I0320 00:09:11.198976 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:11 crc kubenswrapper[5106]: I0320 00:09:11.198986 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:11 crc kubenswrapper[5106]: E0320 00:09:11.199129 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:11 crc kubenswrapper[5106]: I0320 00:09:11.200678 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"23b036e80d136c45a9ff13a63408428b22b422d4f4dcf9c1e8973f1ff9c5bb90"}
Mar 20 00:09:11 crc kubenswrapper[5106]: I0320 00:09:11.201873 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4"}
Mar 20 00:09:11 crc kubenswrapper[5106]: I0320 00:09:11.203337 5106 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407" exitCode=0
Mar 20 00:09:11 crc kubenswrapper[5106]: I0320 00:09:11.203376 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407"}
Mar 20 00:09:11 crc kubenswrapper[5106]: I0320 00:09:11.203474 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:11 crc kubenswrapper[5106]: I0320 00:09:11.203815 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:11 crc kubenswrapper[5106]: I0320 00:09:11.203836 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:11 crc kubenswrapper[5106]: I0320 00:09:11.203845 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:11 crc kubenswrapper[5106]: E0320 00:09:11.203967 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:11 crc kubenswrapper[5106]: E0320 00:09:11.770433 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 20 00:09:11 crc kubenswrapper[5106]: I0320 00:09:11.894046 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.207106 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4"}
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.207148 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe"}
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.207157 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574"}
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.209520 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"2a23d38bcc5416cc2ef716b58ecbcf50ca7b47b3b0e633e63c4cb2566c0285d6"}
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.209553 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"35863999352b5c14cf487ffef517f6c5011b42d6b9129d6f9f161ad9b265e356"}
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.209648 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.210344 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.210378 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.210390 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:12 crc kubenswrapper[5106]: E0320 00:09:12.210563 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.211940 5106 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34" exitCode=0
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.212011 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34"}
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.212129 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.212516 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.212544 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.212556 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:12 crc kubenswrapper[5106]: E0320 00:09:12.212766 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.215648 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"889ed27676105199f97185f7271b652aed7692b5b50a68e9eb5403ffffe024dd"}
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.215726 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"ea7f9d9918db9d162a898a623186f1b6c094b8b6a48de2e07d8a3b950e1989a8"}
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.215692 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.215878 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.216655 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.216691 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.216704 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.216898 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.216947 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:12 crc kubenswrapper[5106]: E0320 00:09:12.216955 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.216967 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:12 crc kubenswrapper[5106]: E0320 00:09:12.217339 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:12 crc kubenswrapper[5106]: I0320 00:09:12.893533 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.055431 5106 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Mar 20 00:09:13 crc kubenswrapper[5106]: E0320 00:09:13.057517 5106 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 20 00:09:13 crc kubenswrapper[5106]: E0320 00:09:13.150065 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="6.4s"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.221417 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"a8428ebe4abc8dd7a8d292a9989c16c6616ff7e40a01779178094a437cb8af76"}
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.224633 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"956040f301a3708a45523578cb0345724a0643786063f86c23c9200085e0c602"}
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.224783 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.224843 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.224987 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.225017 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.225647 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.225719 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.225636 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.225790 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.225811 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.225747 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.225861 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.225885 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.225920 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:13 crc kubenswrapper[5106]: E0320 00:09:13.226127 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:13 crc kubenswrapper[5106]: E0320 00:09:13.226427 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:13 crc kubenswrapper[5106]: E0320 00:09:13.226816 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:13 crc kubenswrapper[5106]: E0320 00:09:13.533235 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.607820 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.609223 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.609268 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.609281 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.609303 5106 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Mar 20 00:09:13 crc kubenswrapper[5106]: E0320 00:09:13.609798 5106 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc"
Mar 20 00:09:13 crc kubenswrapper[5106]: I0320 00:09:13.894227 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Mar 20 00:09:14 crc kubenswrapper[5106]: I0320 00:09:14.212146 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:09:14 crc kubenswrapper[5106]: I0320 00:09:14.241478 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"8eeb239f1f4dc20d7a4eb3082743b15402fcfa64c2d765f055b60febb4bb8159"}
Mar 20 00:09:14 crc kubenswrapper[5106]: I0320 00:09:14.241560 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:14 crc kubenswrapper[5106]: I0320 00:09:14.241693 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:14 crc kubenswrapper[5106]: I0320 00:09:14.242672 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:14 crc kubenswrapper[5106]: I0320 00:09:14.242696 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:14 crc kubenswrapper[5106]: I0320 00:09:14.242758 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:14 crc kubenswrapper[5106]: I0320 00:09:14.242770 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:14 crc kubenswrapper[5106]: I0320 00:09:14.242706 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:14 crc kubenswrapper[5106]: I0320 00:09:14.242804 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:14 crc kubenswrapper[5106]: E0320 00:09:14.243234 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:14 crc kubenswrapper[5106]: E0320 00:09:14.243411 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:14 crc kubenswrapper[5106]: E0320 00:09:14.475988 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 20 00:09:14 crc kubenswrapper[5106]: I0320 00:09:14.809179 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:09:14 crc kubenswrapper[5106]: I0320 00:09:14.894329 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Mar 20 00:09:15 crc kubenswrapper[5106]: I0320 00:09:15.139625 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:09:15 crc kubenswrapper[5106]: I0320 00:09:15.139687 5106 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body=
Mar 20 00:09:15 crc kubenswrapper[5106]: I0320 00:09:15.139753 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused"
Mar 20 00:09:15 crc kubenswrapper[5106]: I0320 00:09:15.248080 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"db54af3009de86e4a8fcc0fdc5233912d638c6cb4df949f46d9d0836472096c3"}
Mar 20 00:09:15 crc kubenswrapper[5106]: I0320 00:09:15.248140 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"e57cd65d5edb028b95e533a8f24e531dd67b433e774ff14db6159c675c881932"}
Mar 20 00:09:15 crc kubenswrapper[5106]: I0320 00:09:15.248256 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:15 crc kubenswrapper[5106]: I0320 00:09:15.249357 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:15 crc kubenswrapper[5106]: I0320 00:09:15.249401 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:15 crc kubenswrapper[5106]: I0320 00:09:15.249412 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:15 crc kubenswrapper[5106]: E0320 00:09:15.249799 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:16 crc kubenswrapper[5106]: I0320 00:09:16.256142 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"f027531e351171f27f0d234f1340e8fc330fd1beef5df845b3d7c67d8f8cdd5e"}
Mar 20 00:09:16 crc kubenswrapper[5106]: I0320 00:09:16.256250 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:16 crc kubenswrapper[5106]: I0320 00:09:16.256963 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:16 crc kubenswrapper[5106]: I0320 00:09:16.256995 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:16 crc kubenswrapper[5106]: I0320 00:09:16.257009 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc"
event="NodeHasSufficientPID" Mar 20 00:09:16 crc kubenswrapper[5106]: E0320 00:09:16.257258 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:16 crc kubenswrapper[5106]: I0320 00:09:16.258013 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Mar 20 00:09:16 crc kubenswrapper[5106]: I0320 00:09:16.260090 5106 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="956040f301a3708a45523578cb0345724a0643786063f86c23c9200085e0c602" exitCode=255 Mar 20 00:09:16 crc kubenswrapper[5106]: I0320 00:09:16.260142 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"956040f301a3708a45523578cb0345724a0643786063f86c23c9200085e0c602"} Mar 20 00:09:16 crc kubenswrapper[5106]: I0320 00:09:16.260271 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:16 crc kubenswrapper[5106]: I0320 00:09:16.260967 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:16 crc kubenswrapper[5106]: I0320 00:09:16.261055 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:16 crc kubenswrapper[5106]: I0320 00:09:16.261084 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:16 crc kubenswrapper[5106]: E0320 00:09:16.261821 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:16 crc kubenswrapper[5106]: I0320 00:09:16.262361 5106 
scope.go:117] "RemoveContainer" containerID="956040f301a3708a45523578cb0345724a0643786063f86c23c9200085e0c602" Mar 20 00:09:17 crc kubenswrapper[5106]: I0320 00:09:17.265110 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Mar 20 00:09:17 crc kubenswrapper[5106]: I0320 00:09:17.266612 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f62e9b6ad97be5fe5b845a463847df1e26901970c36fe78367794d78d5814a44"} Mar 20 00:09:17 crc kubenswrapper[5106]: I0320 00:09:17.266713 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:17 crc kubenswrapper[5106]: I0320 00:09:17.266762 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:17 crc kubenswrapper[5106]: I0320 00:09:17.267884 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:17 crc kubenswrapper[5106]: I0320 00:09:17.267923 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:17 crc kubenswrapper[5106]: I0320 00:09:17.267938 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:17 crc kubenswrapper[5106]: I0320 00:09:17.267947 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:17 crc kubenswrapper[5106]: I0320 00:09:17.267993 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:17 crc kubenswrapper[5106]: I0320 00:09:17.268002 5106 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 20 00:09:17 crc kubenswrapper[5106]: E0320 00:09:17.268341 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:17 crc kubenswrapper[5106]: E0320 00:09:17.268659 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:17 crc kubenswrapper[5106]: E0320 00:09:17.302043 5106 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.270062 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.270232 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.270977 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.271023 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.271069 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:18 crc kubenswrapper[5106]: E0320 00:09:18.271669 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.309816 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.310148 5106 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.311282 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.311328 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.311350 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:18 crc kubenswrapper[5106]: E0320 00:09:18.311724 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.319751 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.814408 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.814692 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.815849 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.815923 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:18 crc kubenswrapper[5106]: I0320 00:09:18.815942 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:18 crc kubenswrapper[5106]: E0320 00:09:18.816674 5106 kubelet.go:3336] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:19 crc kubenswrapper[5106]: I0320 00:09:19.273877 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:19 crc kubenswrapper[5106]: I0320 00:09:19.273958 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:19 crc kubenswrapper[5106]: I0320 00:09:19.274153 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:19 crc kubenswrapper[5106]: I0320 00:09:19.275015 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:19 crc kubenswrapper[5106]: I0320 00:09:19.275044 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:19 crc kubenswrapper[5106]: I0320 00:09:19.275056 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:19 crc kubenswrapper[5106]: I0320 00:09:19.275129 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:19 crc kubenswrapper[5106]: I0320 00:09:19.275154 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:19 crc kubenswrapper[5106]: I0320 00:09:19.275165 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:19 crc kubenswrapper[5106]: E0320 00:09:19.276084 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:19 crc kubenswrapper[5106]: E0320 00:09:19.276489 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:19 crc kubenswrapper[5106]: I0320 00:09:19.280107 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:19 crc kubenswrapper[5106]: I0320 00:09:19.662571 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:20 crc kubenswrapper[5106]: I0320 00:09:20.010975 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:20 crc kubenswrapper[5106]: I0320 00:09:20.012473 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:20 crc kubenswrapper[5106]: I0320 00:09:20.012520 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:20 crc kubenswrapper[5106]: I0320 00:09:20.012535 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:20 crc kubenswrapper[5106]: I0320 00:09:20.012558 5106 kubelet_node_status.go:78] "Attempting to register node" node="crc" Mar 20 00:09:20 crc kubenswrapper[5106]: I0320 00:09:20.276254 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:20 crc kubenswrapper[5106]: I0320 00:09:20.277361 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:20 crc kubenswrapper[5106]: I0320 00:09:20.277391 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:20 crc kubenswrapper[5106]: I0320 00:09:20.277402 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:20 crc 
kubenswrapper[5106]: E0320 00:09:20.277783 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:21 crc kubenswrapper[5106]: I0320 00:09:21.281328 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:21 crc kubenswrapper[5106]: I0320 00:09:21.282365 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:21 crc kubenswrapper[5106]: I0320 00:09:21.282441 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:21 crc kubenswrapper[5106]: I0320 00:09:21.282468 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:21 crc kubenswrapper[5106]: E0320 00:09:21.283026 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:21 crc kubenswrapper[5106]: I0320 00:09:21.765635 5106 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Mar 20 00:09:21 crc kubenswrapper[5106]: I0320 00:09:21.821891 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:22 crc kubenswrapper[5106]: I0320 00:09:22.284319 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:22 crc kubenswrapper[5106]: I0320 00:09:22.285095 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:22 crc kubenswrapper[5106]: I0320 00:09:22.285137 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:22 crc 
kubenswrapper[5106]: I0320 00:09:22.285152 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:22 crc kubenswrapper[5106]: E0320 00:09:22.285421 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:22 crc kubenswrapper[5106]: I0320 00:09:22.663115 5106 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 00:09:22 crc kubenswrapper[5106]: I0320 00:09:22.663197 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 00:09:23 crc kubenswrapper[5106]: I0320 00:09:23.545818 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 20 00:09:23 crc kubenswrapper[5106]: I0320 00:09:23.546156 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:23 crc kubenswrapper[5106]: I0320 00:09:23.547171 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:23 crc kubenswrapper[5106]: I0320 00:09:23.547214 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:23 crc kubenswrapper[5106]: I0320 00:09:23.547231 5106 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:23 crc kubenswrapper[5106]: E0320 00:09:23.547730 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:23 crc kubenswrapper[5106]: I0320 00:09:23.585344 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 20 00:09:23 crc kubenswrapper[5106]: I0320 00:09:23.833653 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 20 00:09:23 crc kubenswrapper[5106]: I0320 00:09:23.963048 5106 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 20 00:09:23 crc kubenswrapper[5106]: I0320 00:09:23.963121 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 20 00:09:24 crc kubenswrapper[5106]: I0320 00:09:24.289864 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:24 crc kubenswrapper[5106]: I0320 00:09:24.290815 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:24 crc kubenswrapper[5106]: I0320 00:09:24.290877 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:24 crc kubenswrapper[5106]: I0320 00:09:24.290897 5106 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:24 crc kubenswrapper[5106]: E0320 00:09:24.291554 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:25 crc kubenswrapper[5106]: I0320 00:09:25.292447 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:25 crc kubenswrapper[5106]: I0320 00:09:25.293217 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:25 crc kubenswrapper[5106]: I0320 00:09:25.293258 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:25 crc kubenswrapper[5106]: I0320 00:09:25.293269 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:25 crc kubenswrapper[5106]: E0320 00:09:25.293692 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:25 crc kubenswrapper[5106]: I0320 00:09:25.746093 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:25 crc kubenswrapper[5106]: I0320 00:09:25.746504 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:25 crc kubenswrapper[5106]: I0320 00:09:25.748007 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:25 crc kubenswrapper[5106]: I0320 00:09:25.748075 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:25 crc kubenswrapper[5106]: I0320 00:09:25.748095 5106 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 20 00:09:25 crc kubenswrapper[5106]: E0320 00:09:25.748753 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:25 crc kubenswrapper[5106]: I0320 00:09:25.754014 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:26 crc kubenswrapper[5106]: I0320 00:09:26.295661 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:26 crc kubenswrapper[5106]: I0320 00:09:26.296530 5106 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Mar 20 00:09:26 crc kubenswrapper[5106]: I0320 00:09:26.296621 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:26 crc kubenswrapper[5106]: I0320 00:09:26.296627 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Mar 20 00:09:26 crc kubenswrapper[5106]: I0320 00:09:26.296660 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:26 crc kubenswrapper[5106]: I0320 00:09:26.296672 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:26 crc kubenswrapper[5106]: E0320 00:09:26.297158 5106 kubelet.go:3336] "No need to create a mirror pod, since failed 
to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:26 crc kubenswrapper[5106]: E0320 00:09:26.870219 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 20 00:09:27 crc kubenswrapper[5106]: E0320 00:09:27.302503 5106 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 00:09:28 crc kubenswrapper[5106]: I0320 00:09:28.940990 5106 trace.go:236] Trace[2040043410]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Mar-2026 00:09:17.946) (total time: 10994ms): Mar 20 00:09:28 crc kubenswrapper[5106]: Trace[2040043410]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 10994ms (00:09:28.940) Mar 20 00:09:28 crc kubenswrapper[5106]: Trace[2040043410]: [10.994700135s] [10.994700135s] END Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.941045 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.940999 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e64158bf8a8a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:06.935974056 +0000 UTC m=+1.369708110,LastTimestamp:2026-03-20 00:09:06.935974056 +0000 UTC m=+1.369708110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:28 crc kubenswrapper[5106]: I0320 00:09:28.941356 5106 trace.go:236] Trace[1577395767]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Mar-2026 00:09:15.915) (total time: 13025ms): Mar 20 00:09:28 crc kubenswrapper[5106]: Trace[1577395767]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 13025ms (00:09:28.941) Mar 20 00:09:28 crc kubenswrapper[5106]: Trace[1577395767]: [13.025937226s] [13.025937226s] END Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.941398 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 20 00:09:28 crc kubenswrapper[5106]: I0320 00:09:28.942193 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.942281 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.942943 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.943002 5106 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.943186 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905c7fbe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009626046 +0000 UTC m=+1.443360090,LastTimestamp:2026-03-20 00:09:07.009626046 +0000 UTC m=+1.443360090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.946515 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905ceb16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009653526 +0000 UTC m=+1.443387570,LastTimestamp:2026-03-20 00:09:07.009653526 +0000 UTC m=+1.443387570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.951464 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905d1817 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009665047 +0000 UTC m=+1.443399101,LastTimestamp:2026-03-20 00:09:07.009665047 +0000 UTC m=+1.443399101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.954782 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415a0fbe3d4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across 
pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.288507348 +0000 UTC m=+1.722241412,LastTimestamp:2026-03-20 00:09:07.288507348 +0000 UTC m=+1.722241412,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.958257 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905c7fbe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905c7fbe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009626046 +0000 UTC m=+1.443360090,LastTimestamp:2026-03-20 00:09:07.389013038 +0000 UTC m=+1.822747132,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.963000 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905ceb16\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905ceb16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009653526 +0000 UTC m=+1.443387570,LastTimestamp:2026-03-20 
00:09:07.389060959 +0000 UTC m=+1.822795053,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.968533 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905d1817\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905d1817 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009665047 +0000 UTC m=+1.443399101,LastTimestamp:2026-03-20 00:09:07.38908632 +0000 UTC m=+1.822820414,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.973966 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905c7fbe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905c7fbe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009626046 +0000 UTC m=+1.443360090,LastTimestamp:2026-03-20 00:09:07.462790291 +0000 UTC m=+1.896524335,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:28 crc kubenswrapper[5106]: I0320 00:09:28.975541 5106 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.982244 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905ceb16\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905ceb16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009653526 +0000 UTC m=+1.443387570,LastTimestamp:2026-03-20 00:09:07.462821572 +0000 UTC m=+1.896555626,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.990611 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905d1817\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905d1817 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009665047 +0000 UTC m=+1.443399101,LastTimestamp:2026-03-20 
00:09:07.462832882 +0000 UTC m=+1.896566936,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:28 crc kubenswrapper[5106]: E0320 00:09:28.998339 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905c7fbe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905c7fbe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009626046 +0000 UTC m=+1.443360090,LastTimestamp:2026-03-20 00:09:07.465087967 +0000 UTC m=+1.898822091,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.004526 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905ceb16\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905ceb16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009653526 +0000 UTC m=+1.443387570,LastTimestamp:2026-03-20 00:09:07.465126218 +0000 UTC m=+1.898860302,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.015115 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905c7fbe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905c7fbe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009626046 +0000 UTC m=+1.443360090,LastTimestamp:2026-03-20 00:09:07.465137368 +0000 UTC m=+1.898871422,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.019609 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905d1817\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905d1817 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009665047 +0000 UTC m=+1.443399101,LastTimestamp:2026-03-20 00:09:07.465151939 +0000 UTC m=+1.898886023,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.031856 5106 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905ceb16\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905ceb16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009653526 +0000 UTC m=+1.443387570,LastTimestamp:2026-03-20 00:09:07.465308263 +0000 UTC m=+1.899042317,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.044435 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905d1817\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905d1817 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009665047 +0000 UTC m=+1.443399101,LastTimestamp:2026-03-20 00:09:07.465328643 +0000 UTC m=+1.899062707,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.049541 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905c7fbe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905c7fbe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009626046 +0000 UTC m=+1.443360090,LastTimestamp:2026-03-20 00:09:07.467803114 +0000 UTC m=+1.901537218,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.054204 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905ceb16\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905ceb16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009653526 +0000 UTC m=+1.443387570,LastTimestamp:2026-03-20 00:09:07.467856625 +0000 UTC m=+1.901590719,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.059699 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905d1817\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905d1817 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009665047 +0000 UTC m=+1.443399101,LastTimestamp:2026-03-20 00:09:07.467916337 +0000 UTC m=+1.901650431,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.076842 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905c7fbe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905c7fbe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009626046 +0000 UTC m=+1.443360090,LastTimestamp:2026-03-20 00:09:07.470161062 +0000 UTC m=+1.903895156,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.083672 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905ceb16\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905ceb16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009653526 +0000 UTC m=+1.443387570,LastTimestamp:2026-03-20 00:09:07.470193733 +0000 UTC m=+1.903927827,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.091325 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905d1817\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905d1817 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009665047 +0000 UTC m=+1.443399101,LastTimestamp:2026-03-20 00:09:07.470214593 +0000 UTC m=+1.903948687,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.097998 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905c7fbe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905c7fbe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009626046 +0000 UTC 
m=+1.443360090,LastTimestamp:2026-03-20 00:09:07.471509585 +0000 UTC m=+1.905243669,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.103757 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e6415905ceb16\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e6415905ceb16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.009653526 +0000 UTC m=+1.443387570,LastTimestamp:2026-03-20 00:09:07.471534136 +0000 UTC m=+1.905268220,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.111610 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e6415c4e3d75e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.89091107 +0000 UTC m=+2.324645155,LastTimestamp:2026-03-20 00:09:07.89091107 +0000 UTC m=+2.324645155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.119938 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e6415c5f7f66b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.909006955 +0000 UTC m=+2.342741049,LastTimestamp:2026-03-20 00:09:07.909006955 +0000 UTC m=+2.342741049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.134447 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e6415c6ba3828 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.921737768 +0000 UTC m=+2.355471822,LastTimestamp:2026-03-20 00:09:07.921737768 +0000 UTC m=+2.355471822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.140414 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e6415c756dc52 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.93200341 +0000 UTC m=+2.365737504,LastTimestamp:2026-03-20 00:09:07.93200341 +0000 UTC m=+2.365737504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.150740 5106 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6415c7bcbc6a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:07.938679914 +0000 UTC m=+2.372413968,LastTimestamp:2026-03-20 00:09:07.938679914 +0000 UTC m=+2.372413968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.155208 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e64162b58e5ad openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:09.609858477 +0000 UTC m=+4.043592571,LastTimestamp:2026-03-20 00:09:09.609858477 +0000 UTC m=+4.043592571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.159888 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e64162b5e4312 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:09.610210066 +0000 UTC m=+4.043944120,LastTimestamp:2026-03-20 00:09:09.610210066 +0000 UTC m=+4.043944120,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.164143 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e64162b5f9e62 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:09.610298978 +0000 UTC m=+4.044033072,LastTimestamp:2026-03-20 
00:09:09.610298978 +0000 UTC m=+4.044033072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.168528 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e64162b6d6792 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:09.61120245 +0000 UTC m=+4.044936504,LastTimestamp:2026-03-20 00:09:09.61120245 +0000 UTC m=+4.044936504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.172799 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e64162b7350da openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:09.61158985 +0000 UTC m=+4.045323924,LastTimestamp:2026-03-20 
00:09:09.61158985 +0000 UTC m=+4.045323924,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.177163 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e64162d56074b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:09.643224907 +0000 UTC m=+4.076959001,LastTimestamp:2026-03-20 00:09:09.643224907 +0000 UTC m=+4.076959001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.184171 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e64162e43117c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:09.658759548 +0000 UTC 
m=+4.092493642,LastTimestamp:2026-03-20 00:09:09.658759548 +0000 UTC m=+4.092493642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.188271 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e64162e438699 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:09.658789529 +0000 UTC m=+4.092523623,LastTimestamp:2026-03-20 00:09:09.658789529 +0000 UTC m=+4.092523623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.192863 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e64162e4adfdb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:09.659271131 +0000 UTC 
m=+4.093005185,LastTimestamp:2026-03-20 00:09:09.659271131 +0000 UTC m=+4.093005185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.197729 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e64162e506374 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:09.6596325 +0000 UTC m=+4.093366554,LastTimestamp:2026-03-20 00:09:09.6596325 +0000 UTC m=+4.093366554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.202592 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e64162e64584a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:09.660940362 +0000 UTC m=+4.094674446,LastTimestamp:2026-03-20 00:09:09.660940362 +0000 UTC m=+4.094674446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.207352 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e64163e7213b0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:09.93027576 +0000 UTC m=+4.364009814,LastTimestamp:2026-03-20 00:09:09.93027576 +0000 UTC m=+4.364009814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.211979 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e641640edade2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:09.971930594 +0000 UTC m=+4.405664678,LastTimestamp:2026-03-20 00:09:09.971930594 +0000 UTC m=+4.405664678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.215785 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e641640ffb509 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:09.973112073 +0000 UTC m=+4.406846117,LastTimestamp:2026-03-20 00:09:09.973112073 +0000 
UTC m=+4.406846117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.220696 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e64164d93ea56 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:10.184151638 +0000 UTC m=+4.617885712,LastTimestamp:2026-03-20 00:09:10.184151638 +0000 UTC m=+4.617885712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.227736 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e64164e0af83c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:10.19195398 +0000 UTC m=+4.625688054,LastTimestamp:2026-03-20 00:09:10.19195398 +0000 UTC m=+4.625688054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.232907 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e64164e0fcf90 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:10.192271248 +0000 UTC m=+4.626005302,LastTimestamp:2026-03-20 00:09:10.192271248 +0000 UTC m=+4.626005302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.238659 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e64164e6dce6b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:10.198431339 +0000 UTC m=+4.632165423,LastTimestamp:2026-03-20 00:09:10.198431339 +0000 UTC m=+4.632165423,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.243626 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e6416699651b9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:10.654071225 +0000 UTC m=+5.087805299,LastTimestamp:2026-03-20 00:09:10.654071225 +0000 UTC m=+5.087805299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: 
E0320 00:09:29.249148 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e64166d568ecd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:10.717001421 +0000 UTC m=+5.150735485,LastTimestamp:2026-03-20 00:09:10.717001421 +0000 UTC m=+5.150735485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.253181 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e64166d67235c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:10.718088028 +0000 UTC m=+5.151822082,LastTimestamp:2026-03-20 00:09:10.718088028 +0000 UTC m=+5.151822082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: 
E0320 00:09:29.261508 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e64166d736018 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:10.718890008 +0000 UTC m=+5.152624072,LastTimestamp:2026-03-20 00:09:10.718890008 +0000 UTC m=+5.152624072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.266531 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e64166d7c4952 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:10.719474002 +0000 UTC m=+5.153208056,LastTimestamp:2026-03-20 00:09:10.719474002 +0000 UTC m=+5.153208056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.272991 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e6416726b7e0a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:10.802259466 +0000 UTC m=+5.235993530,LastTimestamp:2026-03-20 00:09:10.802259466 +0000 UTC m=+5.235993530,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.281797 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e641673096011 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:10.812606481 +0000 UTC m=+5.246340535,LastTimestamp:2026-03-20 00:09:10.812606481 +0000 UTC m=+5.246340535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.287375 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e6416730ba8d8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:10.812756184 +0000 UTC m=+5.246490238,LastTimestamp:2026-03-20 00:09:10.812756184 +0000 UTC m=+5.246490238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.292313 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416731ddce2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:10.813949154 +0000 UTC m=+5.247683248,LastTimestamp:2026-03-20 00:09:10.813949154 +0000 UTC m=+5.247683248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.296315 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e6416732203a4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:10.81422122 +0000 UTC m=+5.247955264,LastTimestamp:2026-03-20 00:09:10.81422122 +0000 UTC m=+5.247955264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.303251 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e64168a696d15 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.204777237 +0000 UTC m=+5.638511291,LastTimestamp:2026-03-20 00:09:11.204777237 +0000 UTC m=+5.638511291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.310222 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e64169a477c3c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.470988348 +0000 UTC m=+5.904722422,LastTimestamp:2026-03-20 00:09:11.470988348 +0000 UTC m=+5.904722422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.318620 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e64169d3157b2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.51986885 +0000 UTC m=+5.953602914,LastTimestamp:2026-03-20 00:09:11.51986885 +0000 UTC m=+5.953602914,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.328443 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e6416a07e38fe openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.57523891 +0000 UTC m=+6.008972974,LastTimestamp:2026-03-20 00:09:11.57523891 +0000 UTC m=+6.008972974,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 
00:09:29.333062 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416a07e082a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.57522641 +0000 UTC m=+6.008960484,LastTimestamp:2026-03-20 00:09:11.57522641 +0000 UTC m=+6.008960484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.336948 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e6416a080f408 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.575417864 +0000 UTC m=+6.009151938,LastTimestamp:2026-03-20 00:09:11.575417864 +0000 UTC m=+6.009151938,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc 
kubenswrapper[5106]: E0320 00:09:29.341191 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e6416a093f49b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.576663195 +0000 UTC m=+6.010397259,LastTimestamp:2026-03-20 00:09:11.576663195 +0000 UTC m=+6.010397259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.345742 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e6416a10f27db openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container 
kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.584737243 +0000 UTC m=+6.018471297,LastTimestamp:2026-03-20 00:09:11.584737243 +0000 UTC m=+6.018471297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.349816 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e6416a15094fb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.589025019 +0000 UTC m=+6.022759073,LastTimestamp:2026-03-20 00:09:11.589025019 +0000 UTC m=+6.022759073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.353550 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416a6e7c7d5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.682820053 +0000 UTC m=+6.116554127,LastTimestamp:2026-03-20 00:09:11.682820053 +0000 UTC m=+6.116554127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.358042 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416a6f6e3c8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.683810248 +0000 UTC m=+6.117544302,LastTimestamp:2026-03-20 00:09:11.683810248 +0000 UTC m=+6.117544302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.365216 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e6416a76f849e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.691715742 +0000 UTC m=+6.125449796,LastTimestamp:2026-03-20 00:09:11.691715742 +0000 UTC m=+6.125449796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.368016 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e6416b12dc960 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.855180128 +0000 UTC m=+6.288914192,LastTimestamp:2026-03-20 00:09:11.855180128 +0000 UTC m=+6.288914192,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.370540 5106 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e6416b12e7253 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.855223379 +0000 UTC m=+6.288957433,LastTimestamp:2026-03-20 00:09:11.855223379 +0000 UTC m=+6.288957433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.372692 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e6416b24a5b7c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.873829756 +0000 UTC m=+6.307563810,LastTimestamp:2026-03-20 00:09:11.873829756 +0000 UTC 
m=+6.307563810,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.374232 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e6416b2777ed9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.876787929 +0000 UTC m=+6.310521983,LastTimestamp:2026-03-20 00:09:11.876787929 +0000 UTC m=+6.310521983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.376696 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416b48e75e8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: 
kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.9118474 +0000 UTC m=+6.345581454,LastTimestamp:2026-03-20 00:09:11.9118474 +0000 UTC m=+6.345581454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.380603 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416b6a75115 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.947030805 +0000 UTC m=+6.380764859,LastTimestamp:2026-03-20 00:09:11.947030805 +0000 UTC m=+6.380764859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.386400 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416b6b59bec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:11.947967468 +0000 UTC m=+6.381701522,LastTimestamp:2026-03-20 00:09:11.947967468 +0000 UTC m=+6.381701522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.389919 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416c433daa6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:12.17434487 +0000 UTC m=+6.608078924,LastTimestamp:2026-03-20 00:09:12.17434487 +0000 UTC m=+6.608078924,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.394075 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416c5795bf6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:12.195677174 +0000 UTC m=+6.629411228,LastTimestamp:2026-03-20 00:09:12.195677174 +0000 UTC m=+6.629411228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.398518 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416c58a729a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:12.196797082 +0000 UTC m=+6.630531136,LastTimestamp:2026-03-20 00:09:12.196797082 +0000 UTC m=+6.630531136,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.403343 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e6416c68d820b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:12.213774859 +0000 UTC m=+6.647508913,LastTimestamp:2026-03-20 00:09:12.213774859 +0000 UTC m=+6.647508913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.408219 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e6416ddfbd018 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:12.606879768 +0000 UTC m=+7.040613822,LastTimestamp:2026-03-20 00:09:12.606879768 +0000 UTC 
m=+7.040613822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.412023 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416de1df1bb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:12.609116603 +0000 UTC m=+7.042850657,LastTimestamp:2026-03-20 00:09:12.609116603 +0000 UTC m=+7.042850657,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.415448 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e6416df89be0f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:12.632958479 +0000 UTC m=+7.066692533,LastTimestamp:2026-03-20 00:09:12.632958479 +0000 UTC 
m=+7.066692533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.419110 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e6416df9bd954 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:12.634145108 +0000 UTC m=+7.067879172,LastTimestamp:2026-03-20 00:09:12.634145108 +0000 UTC m=+7.067879172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.422472 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416dfc07fd3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container 
kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:12.636547027 +0000 UTC m=+7.070281081,LastTimestamp:2026-03-20 00:09:12.636547027 +0000 UTC m=+7.070281081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.426876 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e64172f0b0bf1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:13.966832625 +0000 UTC m=+8.400566669,LastTimestamp:2026-03-20 00:09:13.966832625 +0000 UTC m=+8.400566669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.430880 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e641737c8daf0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 
00:09:14.113489648 +0000 UTC m=+8.547223702,LastTimestamp:2026-03-20 00:09:14.113489648 +0000 UTC m=+8.547223702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.435819 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e641737dc74cc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:14.11477422 +0000 UTC m=+8.548508274,LastTimestamp:2026-03-20 00:09:14.11477422 +0000 UTC m=+8.548508274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.440067 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e64174aa2f5ae openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: 
etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:14.42977323 +0000 UTC m=+8.863507284,LastTimestamp:2026-03-20 00:09:14.42977323 +0000 UTC m=+8.863507284,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.447673 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e64174b6280de openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:14.442326238 +0000 UTC m=+8.876060292,LastTimestamp:2026-03-20 00:09:14.442326238 +0000 UTC m=+8.876060292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.451763 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e64174b714c62 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:14.443295842 +0000 UTC m=+8.877029896,LastTimestamp:2026-03-20 00:09:14.443295842 +0000 UTC m=+8.877029896,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.458059 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 20 00:09:29 crc kubenswrapper[5106]: &Event{ObjectMeta:{kube-apiserver-crc.189e641774f4176b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": dial tcp 192.168.126.11:6443: connect: connection refused Mar 20 00:09:29 crc kubenswrapper[5106]: body: Mar 20 00:09:29 crc kubenswrapper[5106]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:15.139733355 +0000 UTC m=+9.573467409,LastTimestamp:2026-03-20 00:09:15.139733355 +0000 UTC m=+9.573467409,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 00:09:29 crc kubenswrapper[5106]: > Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.463797 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e641774f50989 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:15.139795337 +0000 UTC m=+9.573529391,LastTimestamp:2026-03-20 00:09:15.139795337 +0000 UTC m=+9.573529391,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.468486 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e641779ff6b9c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:15.224361884 +0000 UTC m=+9.658095938,LastTimestamp:2026-03-20 00:09:15.224361884 +0000 UTC m=+9.658095938,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.470026 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e641782c1e8f5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:15.371325685 +0000 UTC m=+9.805059739,LastTimestamp:2026-03-20 00:09:15.371325685 +0000 UTC m=+9.805059739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.475521 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e64178398c605 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:15.385406981 +0000 UTC m=+9.819141035,LastTimestamp:2026-03-20 00:09:15.385406981 +0000 UTC m=+9.819141035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.479771 5106 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e64179e37d351 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:15.832038225 +0000 UTC m=+10.265772319,LastTimestamp:2026-03-20 00:09:15.832038225 +0000 UTC m=+10.265772319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.484706 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e6417a08fae95 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:15.871350421 +0000 UTC m=+10.305084485,LastTimestamp:2026-03-20 00:09:15.871350421 +0000 UTC m=+10.305084485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.494695 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e6416c58a729a\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416c58a729a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:12.196797082 +0000 UTC m=+6.630531136,LastTimestamp:2026-03-20 00:09:16.263932138 +0000 UTC m=+10.697666182,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.498879 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e6416de1df1bb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416de1df1bb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:12.609116603 +0000 UTC m=+7.042850657,LastTimestamp:2026-03-20 00:09:16.509122252 +0000 UTC 
m=+10.942856316,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.503660 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e6416dfc07fd3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416dfc07fd3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:12.636547027 +0000 UTC m=+7.070281081,LastTimestamp:2026-03-20 00:09:16.523785353 +0000 UTC m=+10.957519427,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.509197 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 00:09:29 crc kubenswrapper[5106]: &Event{ObjectMeta:{kube-controller-manager-crc.189e641935628be5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 20 00:09:29 crc kubenswrapper[5106]: body: Mar 20 00:09:29 crc kubenswrapper[5106]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:22.663164901 +0000 UTC m=+17.096898955,LastTimestamp:2026-03-20 00:09:22.663164901 +0000 UTC m=+17.096898955,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 00:09:29 crc kubenswrapper[5106]: > Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.513186 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e641935636322 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:22.663220002 +0000 UTC m=+17.096954046,LastTimestamp:2026-03-20 00:09:22.663220002 +0000 UTC 
m=+17.096954046,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.518553 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 20 00:09:29 crc kubenswrapper[5106]: &Event{ObjectMeta:{kube-apiserver-crc.189e641982ddfce9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 20 00:09:29 crc kubenswrapper[5106]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 20 00:09:29 crc kubenswrapper[5106]: Mar 20 00:09:29 crc kubenswrapper[5106]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:23.963100393 +0000 UTC m=+18.396834447,LastTimestamp:2026-03-20 00:09:23.963100393 +0000 UTC m=+18.396834447,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 00:09:29 crc kubenswrapper[5106]: > Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.527403 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e641982de9718 openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:23.963139864 +0000 UTC m=+18.396873918,LastTimestamp:2026-03-20 00:09:23.963139864 +0000 UTC m=+18.396873918,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.532969 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 20 00:09:29 crc kubenswrapper[5106]: &Event{ObjectMeta:{kube-apiserver-crc.189e641a0df3f676 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Mar 20 00:09:29 crc kubenswrapper[5106]: body: Mar 20 00:09:29 crc kubenswrapper[5106]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:26.296573558 +0000 UTC m=+20.730307612,LastTimestamp:2026-03-20 00:09:26.296573558 +0000 UTC m=+20.730307612,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 00:09:29 crc kubenswrapper[5106]: > Mar 20 00:09:29 crc 
kubenswrapper[5106]: E0320 00:09:29.537257 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e641a0df52ae8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:26.29665252 +0000 UTC m=+20.730386574,LastTimestamp:2026-03-20 00:09:26.29665252 +0000 UTC m=+20.730386574,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:29 crc kubenswrapper[5106]: I0320 00:09:29.668493 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:29 crc kubenswrapper[5106]: I0320 00:09:29.668702 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:29 crc kubenswrapper[5106]: I0320 00:09:29.669674 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:29 crc kubenswrapper[5106]: I0320 00:09:29.669709 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:29 crc kubenswrapper[5106]: I0320 00:09:29.669720 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 
20 00:09:29 crc kubenswrapper[5106]: E0320 00:09:29.669971 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:29 crc kubenswrapper[5106]: I0320 00:09:29.673896 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:09:29 crc kubenswrapper[5106]: I0320 00:09:29.897837 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:30 crc kubenswrapper[5106]: I0320 00:09:30.307742 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Mar 20 00:09:30 crc kubenswrapper[5106]: I0320 00:09:30.308356 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Mar 20 00:09:30 crc kubenswrapper[5106]: I0320 00:09:30.310359 5106 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f62e9b6ad97be5fe5b845a463847df1e26901970c36fe78367794d78d5814a44" exitCode=255 Mar 20 00:09:30 crc kubenswrapper[5106]: I0320 00:09:30.310464 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"f62e9b6ad97be5fe5b845a463847df1e26901970c36fe78367794d78d5814a44"} Mar 20 00:09:30 crc kubenswrapper[5106]: I0320 00:09:30.310606 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:30 crc kubenswrapper[5106]: I0320 00:09:30.310618 5106 
scope.go:117] "RemoveContainer" containerID="956040f301a3708a45523578cb0345724a0643786063f86c23c9200085e0c602" Mar 20 00:09:30 crc kubenswrapper[5106]: I0320 00:09:30.310857 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:30 crc kubenswrapper[5106]: I0320 00:09:30.311605 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:30 crc kubenswrapper[5106]: I0320 00:09:30.311634 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:30 crc kubenswrapper[5106]: I0320 00:09:30.311653 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:30 crc kubenswrapper[5106]: I0320 00:09:30.311672 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:30 crc kubenswrapper[5106]: I0320 00:09:30.311687 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:30 crc kubenswrapper[5106]: I0320 00:09:30.311710 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:30 crc kubenswrapper[5106]: E0320 00:09:30.313124 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:30 crc kubenswrapper[5106]: E0320 00:09:30.316469 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:30 crc kubenswrapper[5106]: I0320 00:09:30.316872 5106 scope.go:117] "RemoveContainer" containerID="f62e9b6ad97be5fe5b845a463847df1e26901970c36fe78367794d78d5814a44" Mar 20 00:09:30 crc kubenswrapper[5106]: E0320 00:09:30.317462 5106 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Mar 20 00:09:30 crc kubenswrapper[5106]: E0320 00:09:30.327533 5106 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e641afd9c2e37 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:30.317352503 +0000 UTC m=+24.751086557,LastTimestamp:2026-03-20 00:09:30.317352503 +0000 UTC m=+24.751086557,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:30 crc kubenswrapper[5106]: I0320 00:09:30.897597 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:31 crc kubenswrapper[5106]: I0320 00:09:31.314106 5106 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Mar 20 00:09:32 crc kubenswrapper[5106]: I0320 00:09:31.897644 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:32 crc kubenswrapper[5106]: I0320 00:09:32.898081 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:33 crc kubenswrapper[5106]: I0320 00:09:33.897902 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:34 crc kubenswrapper[5106]: I0320 00:09:34.847263 5106 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:09:34 crc kubenswrapper[5106]: I0320 00:09:34.847484 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:34 crc kubenswrapper[5106]: I0320 00:09:34.848291 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:34 crc kubenswrapper[5106]: I0320 00:09:34.848343 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:34 crc kubenswrapper[5106]: I0320 00:09:34.848357 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:34 crc kubenswrapper[5106]: E0320 00:09:34.848817 5106 kubelet.go:3336] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Mar 20 00:09:34 crc kubenswrapper[5106]: I0320 00:09:34.849109 5106 scope.go:117] "RemoveContainer" containerID="f62e9b6ad97be5fe5b845a463847df1e26901970c36fe78367794d78d5814a44" Mar 20 00:09:34 crc kubenswrapper[5106]: E0320 00:09:34.849326 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Mar 20 00:09:34 crc kubenswrapper[5106]: E0320 00:09:34.853943 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e641afd9c2e37\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e641afd9c2e37 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:30.317352503 +0000 UTC m=+24.751086557,LastTimestamp:2026-03-20 00:09:34.849293313 +0000 UTC m=+29.283027367,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 00:09:34 crc kubenswrapper[5106]: 
I0320 00:09:34.897873 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:35 crc kubenswrapper[5106]: I0320 00:09:35.897966 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:35 crc kubenswrapper[5106]: I0320 00:09:35.943469 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:35 crc kubenswrapper[5106]: I0320 00:09:35.944368 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:35 crc kubenswrapper[5106]: I0320 00:09:35.944420 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:35 crc kubenswrapper[5106]: I0320 00:09:35.944433 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:35 crc kubenswrapper[5106]: I0320 00:09:35.944455 5106 kubelet_node_status.go:78] "Attempting to register node" node="crc" Mar 20 00:09:35 crc kubenswrapper[5106]: E0320 00:09:35.948142 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 20 00:09:35 crc kubenswrapper[5106]: E0320 00:09:35.952746 5106 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the 
cluster scope" node="crc" Mar 20 00:09:36 crc kubenswrapper[5106]: I0320 00:09:36.901642 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:37 crc kubenswrapper[5106]: E0320 00:09:37.303093 5106 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 00:09:37 crc kubenswrapper[5106]: I0320 00:09:37.901322 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:38 crc kubenswrapper[5106]: E0320 00:09:38.513422 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 20 00:09:38 crc kubenswrapper[5106]: I0320 00:09:38.901760 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:39 crc kubenswrapper[5106]: I0320 00:09:39.899822 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:40 crc kubenswrapper[5106]: E0320 00:09:40.042231 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" 
cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 20 00:09:40 crc kubenswrapper[5106]: E0320 00:09:40.526419 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 20 00:09:40 crc kubenswrapper[5106]: I0320 00:09:40.897388 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:41 crc kubenswrapper[5106]: I0320 00:09:41.897867 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:42 crc kubenswrapper[5106]: I0320 00:09:42.897020 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:42 crc kubenswrapper[5106]: I0320 00:09:42.952808 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:09:42 crc kubenswrapper[5106]: E0320 00:09:42.952855 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 
20 00:09:42 crc kubenswrapper[5106]: I0320 00:09:42.953541 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:09:42 crc kubenswrapper[5106]: I0320 00:09:42.953661 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:09:42 crc kubenswrapper[5106]: I0320 00:09:42.953695 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:09:42 crc kubenswrapper[5106]: I0320 00:09:42.953757 5106 kubelet_node_status.go:78] "Attempting to register node" node="crc" Mar 20 00:09:42 crc kubenswrapper[5106]: E0320 00:09:42.962125 5106 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 20 00:09:43 crc kubenswrapper[5106]: I0320 00:09:43.898823 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:44 crc kubenswrapper[5106]: I0320 00:09:44.899530 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:45 crc kubenswrapper[5106]: I0320 00:09:45.900083 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:46 crc kubenswrapper[5106]: I0320 00:09:46.898784 5106 csi_plugin.go:988] Failed to contact API server when waiting for 
CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:47 crc kubenswrapper[5106]: E0320 00:09:47.303468 5106 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 00:09:47 crc kubenswrapper[5106]: I0320 00:09:47.899063 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:48 crc kubenswrapper[5106]: I0320 00:09:48.900553 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:49 crc kubenswrapper[5106]: E0320 00:09:49.505828 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 20 00:09:49 crc kubenswrapper[5106]: I0320 00:09:49.898702 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 00:09:49 crc kubenswrapper[5106]: E0320 00:09:49.958429 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 20 00:09:49 crc kubenswrapper[5106]: 
I0320 00:09:49.962355 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:49 crc kubenswrapper[5106]: I0320 00:09:49.963254 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:49 crc kubenswrapper[5106]: I0320 00:09:49.963306 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:49 crc kubenswrapper[5106]: I0320 00:09:49.963323 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:49 crc kubenswrapper[5106]: I0320 00:09:49.963351 5106 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Mar 20 00:09:49 crc kubenswrapper[5106]: E0320 00:09:49.975270 5106 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Mar 20 00:09:50 crc kubenswrapper[5106]: I0320 00:09:50.160025 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:50 crc kubenswrapper[5106]: I0320 00:09:50.161000 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:50 crc kubenswrapper[5106]: I0320 00:09:50.161058 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:50 crc kubenswrapper[5106]: I0320 00:09:50.161069 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:50 crc kubenswrapper[5106]: E0320 00:09:50.161408 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:50 crc kubenswrapper[5106]: I0320 00:09:50.161712 5106 scope.go:117] "RemoveContainer" containerID="f62e9b6ad97be5fe5b845a463847df1e26901970c36fe78367794d78d5814a44"
Mar 20 00:09:50 crc kubenswrapper[5106]: E0320 00:09:50.175009 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e6416c58a729a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416c58a729a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:12.196797082 +0000 UTC m=+6.630531136,LastTimestamp:2026-03-20 00:09:50.162680436 +0000 UTC m=+44.596414500,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 00:09:50 crc kubenswrapper[5106]: E0320 00:09:50.339142 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e6416de1df1bb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416de1df1bb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:12.609116603 +0000 UTC m=+7.042850657,LastTimestamp:2026-03-20 00:09:50.334370898 +0000 UTC m=+44.768104952,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 00:09:50 crc kubenswrapper[5106]: E0320 00:09:50.349301 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e6416dfc07fd3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e6416dfc07fd3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:12.636547027 +0000 UTC m=+7.070281081,LastTimestamp:2026-03-20 00:09:50.344911492 +0000 UTC m=+44.778645546,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 00:09:50 crc kubenswrapper[5106]: I0320 00:09:50.834157 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Mar 20 00:09:50 crc kubenswrapper[5106]: I0320 00:09:50.835510 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"625445a27682332d91a26aa246c041ae3fa4a4d1ea219c5360c197e28bc85cbb"}
Mar 20 00:09:50 crc kubenswrapper[5106]: I0320 00:09:50.835704 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:50 crc kubenswrapper[5106]: I0320 00:09:50.836164 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:50 crc kubenswrapper[5106]: I0320 00:09:50.836192 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:50 crc kubenswrapper[5106]: I0320 00:09:50.836200 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:50 crc kubenswrapper[5106]: E0320 00:09:50.836448 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:50 crc kubenswrapper[5106]: I0320 00:09:50.897628 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:09:51 crc kubenswrapper[5106]: I0320 00:09:51.838750 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Mar 20 00:09:51 crc kubenswrapper[5106]: I0320 00:09:51.839094 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Mar 20 00:09:51 crc kubenswrapper[5106]: I0320 00:09:51.840330 5106 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="625445a27682332d91a26aa246c041ae3fa4a4d1ea219c5360c197e28bc85cbb" exitCode=255
Mar 20 00:09:51 crc kubenswrapper[5106]: I0320 00:09:51.840388 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"625445a27682332d91a26aa246c041ae3fa4a4d1ea219c5360c197e28bc85cbb"}
Mar 20 00:09:51 crc kubenswrapper[5106]: I0320 00:09:51.840419 5106 scope.go:117] "RemoveContainer" containerID="f62e9b6ad97be5fe5b845a463847df1e26901970c36fe78367794d78d5814a44"
Mar 20 00:09:51 crc kubenswrapper[5106]: I0320 00:09:51.840559 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:51 crc kubenswrapper[5106]: I0320 00:09:51.841063 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:51 crc kubenswrapper[5106]: I0320 00:09:51.841121 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:51 crc kubenswrapper[5106]: I0320 00:09:51.841134 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:51 crc kubenswrapper[5106]: E0320 00:09:51.841442 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:51 crc kubenswrapper[5106]: I0320 00:09:51.841699 5106 scope.go:117] "RemoveContainer" containerID="625445a27682332d91a26aa246c041ae3fa4a4d1ea219c5360c197e28bc85cbb"
Mar 20 00:09:51 crc kubenswrapper[5106]: E0320 00:09:51.841879 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Mar 20 00:09:51 crc kubenswrapper[5106]: E0320 00:09:51.844149 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e641afd9c2e37\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e641afd9c2e37 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:30.317352503 +0000 UTC m=+24.751086557,LastTimestamp:2026-03-20 00:09:51.84185676 +0000 UTC m=+46.275590814,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 00:09:51 crc kubenswrapper[5106]: I0320 00:09:51.894502 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:09:52 crc kubenswrapper[5106]: I0320 00:09:52.846490 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Mar 20 00:09:52 crc kubenswrapper[5106]: I0320 00:09:52.898917 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:09:53 crc kubenswrapper[5106]: I0320 00:09:53.901427 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:09:54 crc kubenswrapper[5106]: I0320 00:09:54.846317 5106 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:09:54 crc kubenswrapper[5106]: I0320 00:09:54.847413 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:54 crc kubenswrapper[5106]: I0320 00:09:54.849038 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:54 crc kubenswrapper[5106]: I0320 00:09:54.849109 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:54 crc kubenswrapper[5106]: I0320 00:09:54.849132 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:54 crc kubenswrapper[5106]: E0320 00:09:54.849593 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:09:54 crc kubenswrapper[5106]: I0320 00:09:54.849924 5106 scope.go:117] "RemoveContainer" containerID="625445a27682332d91a26aa246c041ae3fa4a4d1ea219c5360c197e28bc85cbb"
Mar 20 00:09:54 crc kubenswrapper[5106]: E0320 00:09:54.850157 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Mar 20 00:09:54 crc kubenswrapper[5106]: E0320 00:09:54.852838 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e641afd9c2e37\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e641afd9c2e37 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:30.317352503 +0000 UTC m=+24.751086557,LastTimestamp:2026-03-20 00:09:54.85012015 +0000 UTC m=+49.283854214,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 00:09:54 crc kubenswrapper[5106]: I0320 00:09:54.895024 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:09:55 crc kubenswrapper[5106]: I0320 00:09:55.901666 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:09:56 crc kubenswrapper[5106]: I0320 00:09:56.895042 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:09:56 crc kubenswrapper[5106]: E0320 00:09:56.964937 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 20 00:09:56 crc kubenswrapper[5106]: I0320 00:09:56.976185 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:09:56 crc kubenswrapper[5106]: I0320 00:09:56.977257 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:09:56 crc kubenswrapper[5106]: I0320 00:09:56.977325 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:09:56 crc kubenswrapper[5106]: I0320 00:09:56.977345 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:09:56 crc kubenswrapper[5106]: I0320 00:09:56.977379 5106 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Mar 20 00:09:56 crc kubenswrapper[5106]: E0320 00:09:56.989382 5106 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Mar 20 00:09:57 crc kubenswrapper[5106]: E0320 00:09:57.303992 5106 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 20 00:09:57 crc kubenswrapper[5106]: I0320 00:09:57.899953 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:09:58 crc kubenswrapper[5106]: I0320 00:09:58.906565 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:09:59 crc kubenswrapper[5106]: I0320 00:09:59.897167 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:10:00 crc kubenswrapper[5106]: I0320 00:10:00.836725 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:10:00 crc kubenswrapper[5106]: I0320 00:10:00.837079 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:10:00 crc kubenswrapper[5106]: I0320 00:10:00.838031 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:00 crc kubenswrapper[5106]: I0320 00:10:00.838074 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:00 crc kubenswrapper[5106]: I0320 00:10:00.838083 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:00 crc kubenswrapper[5106]: E0320 00:10:00.838408 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:10:00 crc kubenswrapper[5106]: I0320 00:10:00.838651 5106 scope.go:117] "RemoveContainer" containerID="625445a27682332d91a26aa246c041ae3fa4a4d1ea219c5360c197e28bc85cbb"
Mar 20 00:10:00 crc kubenswrapper[5106]: E0320 00:10:00.838861 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Mar 20 00:10:00 crc kubenswrapper[5106]: E0320 00:10:00.845208 5106 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e641afd9c2e37\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e641afd9c2e37 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:09:30.317352503 +0000 UTC m=+24.751086557,LastTimestamp:2026-03-20 00:10:00.838829349 +0000 UTC m=+55.272563403,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 00:10:00 crc kubenswrapper[5106]: I0320 00:10:00.901532 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:10:01 crc kubenswrapper[5106]: I0320 00:10:01.903511 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:10:01 crc kubenswrapper[5106]: E0320 00:10:01.981562 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 20 00:10:02 crc kubenswrapper[5106]: I0320 00:10:02.899247 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:10:03 crc kubenswrapper[5106]: E0320 00:10:03.633895 5106 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 20 00:10:03 crc kubenswrapper[5106]: I0320 00:10:03.900552 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:10:03 crc kubenswrapper[5106]: E0320 00:10:03.974396 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 20 00:10:03 crc kubenswrapper[5106]: I0320 00:10:03.989864 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:10:03 crc kubenswrapper[5106]: I0320 00:10:03.991041 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:03 crc kubenswrapper[5106]: I0320 00:10:03.991080 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:03 crc kubenswrapper[5106]: I0320 00:10:03.991089 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:03 crc kubenswrapper[5106]: I0320 00:10:03.991111 5106 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Mar 20 00:10:03 crc kubenswrapper[5106]: E0320 00:10:03.997654 5106 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Mar 20 00:10:04 crc kubenswrapper[5106]: I0320 00:10:04.247921 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 20 00:10:04 crc kubenswrapper[5106]: I0320 00:10:04.248202 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:10:04 crc kubenswrapper[5106]: I0320 00:10:04.249219 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:04 crc kubenswrapper[5106]: I0320 00:10:04.249285 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:04 crc kubenswrapper[5106]: I0320 00:10:04.249307 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:04 crc kubenswrapper[5106]: E0320 00:10:04.249967 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:10:04 crc kubenswrapper[5106]: I0320 00:10:04.901349 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:10:05 crc kubenswrapper[5106]: I0320 00:10:05.899307 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:10:06 crc kubenswrapper[5106]: I0320 00:10:06.901023 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:10:07 crc kubenswrapper[5106]: E0320 00:10:07.304412 5106 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 20 00:10:07 crc kubenswrapper[5106]: I0320 00:10:07.898154 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:10:08 crc kubenswrapper[5106]: I0320 00:10:08.899684 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:10:09 crc kubenswrapper[5106]: I0320 00:10:09.898949 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:10:10 crc kubenswrapper[5106]: I0320 00:10:10.901116 5106 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 00:10:10 crc kubenswrapper[5106]: E0320 00:10:10.981466 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 20 00:10:10 crc kubenswrapper[5106]: I0320 00:10:10.997768 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:10:10 crc kubenswrapper[5106]: I0320 00:10:10.998912 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:10 crc kubenswrapper[5106]: I0320 00:10:10.998982 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:10 crc kubenswrapper[5106]: I0320 00:10:10.999007 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:10 crc kubenswrapper[5106]: I0320 00:10:10.999044 5106 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Mar 20 00:10:11 crc kubenswrapper[5106]: E0320 00:10:11.009178 5106 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Mar 20 00:10:11 crc kubenswrapper[5106]: I0320 00:10:11.640165 5106 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-87wc9"
Mar 20 00:10:11 crc kubenswrapper[5106]: I0320 00:10:11.649637 5106 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-87wc9"
Mar 20 00:10:11 crc kubenswrapper[5106]: I0320 00:10:11.702951 5106 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 20 00:10:12 crc kubenswrapper[5106]: I0320 00:10:12.517838 5106 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 20 00:10:12 crc kubenswrapper[5106]: I0320 00:10:12.651185 5106 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-04-19 00:05:11 +0000 UTC" deadline="2026-04-14 04:10:34.851593192 +0000 UTC"
Mar 20 00:10:12 crc kubenswrapper[5106]: I0320 00:10:12.651291 5106 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="604h0m22.200306947s"
Mar 20 00:10:13 crc kubenswrapper[5106]: I0320 00:10:13.425423 5106 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160"
Mar 20 00:10:15 crc kubenswrapper[5106]: I0320 00:10:15.160146 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:10:15 crc kubenswrapper[5106]: I0320 00:10:15.161486 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:15 crc kubenswrapper[5106]: I0320 00:10:15.161512 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:15 crc kubenswrapper[5106]: I0320 00:10:15.161522 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:15 crc kubenswrapper[5106]: E0320 00:10:15.161824 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:10:16 crc kubenswrapper[5106]: I0320 00:10:16.160820 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:10:16 crc kubenswrapper[5106]: I0320 00:10:16.162162 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:16 crc kubenswrapper[5106]: I0320 00:10:16.162209 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:16 crc kubenswrapper[5106]: I0320 00:10:16.162221 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:16 crc kubenswrapper[5106]: E0320 00:10:16.162653 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:10:16 crc kubenswrapper[5106]: I0320 00:10:16.162895 5106 scope.go:117] "RemoveContainer" containerID="625445a27682332d91a26aa246c041ae3fa4a4d1ea219c5360c197e28bc85cbb"
Mar 20 00:10:16 crc kubenswrapper[5106]: I0320 00:10:16.914120 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Mar 20 00:10:16 crc kubenswrapper[5106]: I0320 00:10:16.915677 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d"}
Mar 20 00:10:16 crc kubenswrapper[5106]: I0320 00:10:16.915866 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:10:16 crc kubenswrapper[5106]: I0320 00:10:16.916912 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:16 crc kubenswrapper[5106]: I0320 00:10:16.916937 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:16 crc kubenswrapper[5106]: I0320 00:10:16.916948 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:16 crc kubenswrapper[5106]: E0320 00:10:16.917353 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Mar 20 00:10:17 crc kubenswrapper[5106]: E0320 00:10:17.305760 5106 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.010362 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.011644 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.011688 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.011700 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.011814 5106 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.020407 5106 kubelet_node_status.go:127] "Node was previously registered" node="crc"
Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.020835 5106 kubelet_node_status.go:81] "Successfully registered node" node="crc"
Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.020876 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.023158 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.023203 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.023221 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.023248 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.023266 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:18Z","lastTransitionTime":"2026-03-20T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.045126 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.048624 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.048646 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.048657 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.048671 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.048682 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:18Z","lastTransitionTime":"2026-03-20T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.061112 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.066066 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.066096 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.066105 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.066119 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.066128 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:18Z","lastTransitionTime":"2026-03-20T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.079419 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.094070 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.094110 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.094119 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.094132 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.094143 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:18Z","lastTransitionTime":"2026-03-20T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.105701 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.105815 5106 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.105840 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.206601 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.307327 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.407705 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.508685 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.609682 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.710218 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.810484 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.910915 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.921356 5106 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.921817 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.923370 5106 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d" exitCode=255 Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.923429 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d"} Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.923476 5106 scope.go:117] "RemoveContainer" containerID="625445a27682332d91a26aa246c041ae3fa4a4d1ea219c5360c197e28bc85cbb" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.923817 5106 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.924591 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.924622 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.924631 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.925040 5106 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"crc\" not found" node="crc" Mar 20 00:10:18 crc kubenswrapper[5106]: I0320 00:10:18.925273 5106 scope.go:117] "RemoveContainer" containerID="b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d" Mar 20 00:10:18 crc kubenswrapper[5106]: E0320 00:10:18.925472 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Mar 20 00:10:19 crc kubenswrapper[5106]: E0320 00:10:19.011355 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:19 crc kubenswrapper[5106]: E0320 00:10:19.112003 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:19 crc kubenswrapper[5106]: E0320 00:10:19.212324 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:19 crc kubenswrapper[5106]: E0320 00:10:19.313017 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:19 crc kubenswrapper[5106]: E0320 00:10:19.413697 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:19 crc kubenswrapper[5106]: E0320 00:10:19.514844 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:19 crc kubenswrapper[5106]: E0320 00:10:19.615380 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:19 crc kubenswrapper[5106]: E0320 00:10:19.716012 5106 kubelet_node_status.go:515] "Error getting the 
current node from lister" err="node \"crc\" not found" Mar 20 00:10:19 crc kubenswrapper[5106]: E0320 00:10:19.816385 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:19 crc kubenswrapper[5106]: E0320 00:10:19.917223 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:19 crc kubenswrapper[5106]: I0320 00:10:19.927020 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Mar 20 00:10:20 crc kubenswrapper[5106]: E0320 00:10:20.017600 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:20 crc kubenswrapper[5106]: E0320 00:10:20.118039 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:20 crc kubenswrapper[5106]: E0320 00:10:20.218824 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:20 crc kubenswrapper[5106]: E0320 00:10:20.319298 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:20 crc kubenswrapper[5106]: E0320 00:10:20.419948 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:20 crc kubenswrapper[5106]: E0320 00:10:20.520830 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:20 crc kubenswrapper[5106]: E0320 00:10:20.621040 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:20 crc kubenswrapper[5106]: E0320 00:10:20.721973 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" 
Mar 20 00:10:20 crc kubenswrapper[5106]: E0320 00:10:20.822918 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:20 crc kubenswrapper[5106]: E0320 00:10:20.923052 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:21 crc kubenswrapper[5106]: E0320 00:10:21.023534 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:21 crc kubenswrapper[5106]: E0320 00:10:21.124160 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:21 crc kubenswrapper[5106]: E0320 00:10:21.224867 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:21 crc kubenswrapper[5106]: E0320 00:10:21.325843 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:21 crc kubenswrapper[5106]: E0320 00:10:21.426264 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:21 crc kubenswrapper[5106]: E0320 00:10:21.526739 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:21 crc kubenswrapper[5106]: E0320 00:10:21.627194 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:21 crc kubenswrapper[5106]: E0320 00:10:21.728122 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:21 crc kubenswrapper[5106]: E0320 00:10:21.828538 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:21 crc kubenswrapper[5106]: E0320 00:10:21.929439 5106 kubelet_node_status.go:515] "Error getting the current node 
from lister" err="node \"crc\" not found" Mar 20 00:10:22 crc kubenswrapper[5106]: E0320 00:10:22.030288 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:22 crc kubenswrapper[5106]: E0320 00:10:22.130631 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:22 crc kubenswrapper[5106]: E0320 00:10:22.231034 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:22 crc kubenswrapper[5106]: E0320 00:10:22.331889 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:22 crc kubenswrapper[5106]: E0320 00:10:22.432225 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:22 crc kubenswrapper[5106]: E0320 00:10:22.532373 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:22 crc kubenswrapper[5106]: E0320 00:10:22.633500 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:22 crc kubenswrapper[5106]: E0320 00:10:22.734696 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:22 crc kubenswrapper[5106]: E0320 00:10:22.834810 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:22 crc kubenswrapper[5106]: E0320 00:10:22.935028 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:23 crc kubenswrapper[5106]: E0320 00:10:23.035604 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:23 crc kubenswrapper[5106]: E0320 00:10:23.136354 5106 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:23 crc kubenswrapper[5106]: E0320 00:10:23.236705 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:23 crc kubenswrapper[5106]: E0320 00:10:23.337473 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:23 crc kubenswrapper[5106]: E0320 00:10:23.437949 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:23 crc kubenswrapper[5106]: E0320 00:10:23.538999 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:23 crc kubenswrapper[5106]: E0320 00:10:23.639773 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:23 crc kubenswrapper[5106]: E0320 00:10:23.739883 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:23 crc kubenswrapper[5106]: E0320 00:10:23.840479 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:23 crc kubenswrapper[5106]: E0320 00:10:23.940795 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.040937 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.141160 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.242466 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:24 crc 
kubenswrapper[5106]: E0320 00:10:24.344021 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.444467 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.545193 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.645812 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.746021 5106 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.766027 5106 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.841308 5106 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.847042 5106 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.847717 5106 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.848933 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.849161 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.849339 5106 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.849493 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.849698 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:24Z","lastTransitionTime":"2026-03-20T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.856919 5106 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.857148 5106 scope.go:117] "RemoveContainer" containerID="b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d" Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.857389 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.864228 5106 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.864508 5106 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.873473 
5106 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.904112 5106 apiserver.go:52] "Watching apiserver" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.909838 5106 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.910318 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-xtksh","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-node-identity/network-node-identity-dgvkt","openshift-kube-apiserver/kube-apiserver-crc","openshift-multus/multus-additional-cni-plugins-wwnpd","openshift-multus/network-metrics-daemon-5qf4l","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/iptables-alerter-5jnd7","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-dns/node-resolver-zqbrj","openshift-image-registry/node-ca-kq4bp","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-ovn-kubernetes/ovnkube-node-qvw6r","openshift-etcd/etcd-crc","openshift-machine-config-operator/machine-config-daemon-769dn","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc"] Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.911625 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.912210 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.912279 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.912625 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.912701 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.912990 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.913939 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.914314 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.917350 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.918870 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.921836 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.922553 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.922947 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.923334 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.923449 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.923429 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.927745 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.927899 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.933437 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.934172 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.934314 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.943989 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.945731 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.947287 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.947716 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.947813 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.947992 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.948316 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.949876 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.951268 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.951403 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.951706 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.951905 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.952054 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zqbrj" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.952057 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:24Z","lastTransitionTime":"2026-03-20T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.953521 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.953614 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.953931 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.957225 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xtksh" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.957635 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.958711 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.958727 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.962982 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.964363 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.964851 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.965134 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.965340 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.965381 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.965411 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.965638 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.967929 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.971038 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.971039 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.971158 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.971242 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.971321 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.974033 5106 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.974085 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.974390 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.977123 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.977387 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.977781 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.979682 5106 scope.go:117] "RemoveContainer" containerID="b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d" Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.979996 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.980571 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kq4bp" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.982955 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.983134 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.983777 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.984143 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.985662 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.990210 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65fc70aa-db07-47cd-b307-36ca79bc3366-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 
00:10:24.990411 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.990534 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.990694 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.990826 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjs84\" (UniqueName: \"kubernetes.io/projected/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-kube-api-access-jjs84\") pod \"network-metrics-daemon-5qf4l\" (UID: \"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\") " pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.990936 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/65fc70aa-db07-47cd-b307-36ca79bc3366-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991054 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxgmg\" (UniqueName: \"kubernetes.io/projected/65fc70aa-db07-47cd-b307-36ca79bc3366-kube-api-access-sxgmg\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991182 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991307 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991468 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 
00:10:24.991554 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991598 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65fc70aa-db07-47cd-b307-36ca79bc3366-system-cni-dir\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991625 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991649 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991675 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " 
pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991732 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65fc70aa-db07-47cd-b307-36ca79bc3366-cnibin\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991756 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991792 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991827 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991849 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65fc70aa-db07-47cd-b307-36ca79bc3366-os-release\") pod 
\"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991870 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65fc70aa-db07-47cd-b307-36ca79bc3366-cni-binary-copy\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991892 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65fc70aa-db07-47cd-b307-36ca79bc3366-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991938 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.991961 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs\") pod \"network-metrics-daemon-5qf4l\" (UID: \"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\") " pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.992234 5106 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.992319 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:25.492291962 +0000 UTC m=+79.926026036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.992328 5106 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 00:10:24 crc kubenswrapper[5106]: E0320 00:10:24.992378 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:25.492364064 +0000 UTC m=+79.926098118 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.993127 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.993177 5106 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.993772 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.994852 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:24 crc kubenswrapper[5106]: I0320 00:10:24.995117 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.007394 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.008000 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.010932 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.015977 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.016019 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.016040 5106 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.016131 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:25.516105235 +0000 UTC m=+79.949839309 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.018023 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.018051 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.018063 5106 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.018104 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:25.518091006 +0000 UTC m=+79.951825080 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.020829 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.021823 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.027705 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.032671 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268db469-5250-4a96-998b-1d4a465bb175\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac75ee6fde37618d4f3addeaf11a5035d2a16b4a85f84bfb1ce62033bdfaaad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://35863999352b5c14cf487ffef517f6c5011b42d6b9129d6f9f161ad9b265e356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\
\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a23d38bcc5416cc2ef716b58ecbcf50ca7b47b3b0e633e63c4cb2566c0285d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\
\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.043004 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.043340 5106 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.054191 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.054812 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.054868 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.054895 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.054910 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.054919 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:25Z","lastTransitionTime":"2026-03-20T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.064452 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.075076 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.075374 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.085359 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.092853 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.092899 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093026 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093087 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093114 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093155 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093211 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-trcsc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093278 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093319 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093343 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093369 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093395 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Mar 20 00:10:25 crc 
kubenswrapper[5106]: I0320 00:10:25.093450 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093473 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093784 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093811 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093484 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093837 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093743 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093859 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093889 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.093913 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.094007 5106 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.094036 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.094058 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.094371 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.094397 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.094416 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.094498 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.094748 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.094801 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.094870 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.094939 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095056 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095073 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095418 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095597 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095670 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095717 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095740 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095766 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095791 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095815 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" 
(UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095836 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095861 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095887 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095909 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095930 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095953 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095974 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.095996 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096019 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096044 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096068 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096088 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096100 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096111 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096138 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096162 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096178 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096188 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096210 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096232 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096255 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096277 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096302 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096323 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096347 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096369 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096391 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096613 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096667 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096719 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096759 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096832 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.096898 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097069 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097135 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097200 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097263 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097262 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097398 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097440 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097466 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097492 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097516 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097540 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097565 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097610 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097635 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097657 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097683 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097713 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097735 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097759 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097780 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098049 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098073 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098187 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098220 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098248 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098273 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098298 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098320 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098346 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098368 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098392 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098415 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098441 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099529 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099571 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099700 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099740 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099768 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099826 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097409 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097492 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097571 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.097761 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098161 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098196 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098293 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098723 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.098724 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099040 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099058 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099203 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099234 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099181 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099272 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099428 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099427 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099450 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.099777 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.100309 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.100338 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.100429 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.100552 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.100570 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.100678 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.100709 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102346 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102384 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102419 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102450 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102476 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102502 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102528 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102551 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102594 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102621 5106 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102645 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102670 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102695 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102719 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102740 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: 
\"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102763 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102786 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102810 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103002 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103031 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103053 
5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103164 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103194 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103218 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103244 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103268 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod 
\"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103296 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103350 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103379 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103402 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103427 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103453 5106 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103504 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103528 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.105280 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f7bf04f-91df-48c2-916a-afe1e635b543\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\
\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0320 00:10:17.305762 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 00:10:17.305884 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0320 00:10:17.306612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-695311479/tls.crt::/tmp/serving-cert-695311479/tls.key\\\\\\\"\\\\nI0320 00:10:18.010003 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 00:10:18.011836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 00:10:18.011852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 00:10:18.011883 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 00:10:18.011894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 00:10:18.015663 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 00:10:18.015691 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015698 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 00:10:18.015708 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 00:10:18.015711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 00:10:18.015715 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0320 00:10:18.015743 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0320 00:10:18.017230 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.100716 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.100918 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.100922 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.101165 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.101334 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.101388 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.101415 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.101686 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.101830 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.101841 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.101745 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102086 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102290 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102617 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.105901 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102665 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102800 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102805 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102916 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.102994 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103093 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103125 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103147 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103261 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103327 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.101949 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103434 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103498 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103723 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103726 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103882 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103857 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.103905 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.104248 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.104804 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.104868 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.104925 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.105015 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.105069 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.105151 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.105505 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.105377 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.105531 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.105687 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.105793 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.106121 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.106129 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.106262 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.106280 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.106354 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.106484 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.106494 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.106444 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.106767 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.106820 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.106979 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107274 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107328 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107552 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107559 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107557 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107596 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107614 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107629 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107660 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107688 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107711 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107732 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107752 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107771 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107796 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107815 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107839 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107861 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107880 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107900 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107921 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107943 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107964 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107986 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108006 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108027 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108049 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108067 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108086 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108107 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108131 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108484 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108552 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108569 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108616 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108633 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108648 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108663 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108679 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108697 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108713 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108735 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108813 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108840 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108862 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108891 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108910 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108926 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108941 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") "
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108962 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Mar 20 00:10:25
crc kubenswrapper[5106]: I0320 00:10:25.108986 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109005 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109027 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109048 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109067 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109089 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod 
\"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109107 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109128 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109148 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109173 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109190 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109209 5106 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109224 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109247 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109315 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109343 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109370 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: 
\"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107708 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.107990 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108096 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108096 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108109 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108307 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108333 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108392 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108459 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108647 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108662 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108759 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108823 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.108980 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109125 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109246 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109408 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109564 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109680 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.109749 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.110214 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.110464 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.110342 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.110674 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.110732 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.110915 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.110923 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.110988 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111084 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111143 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111176 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111202 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111230 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111253 5106 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111278 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111299 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111321 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111347 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111370 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod 
\"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111392 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111416 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111402 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111497 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111516 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111612 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.111669 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.112058 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.112073 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.112417 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.112452 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.112485 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.112516 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.112697 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.112714 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113039 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113297 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113324 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113332 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113346 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113369 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113369 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113393 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113419 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113443 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113453 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113464 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113487 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113514 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113539 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113563 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113605 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113629 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113715 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113547 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113892 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.113996 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.114027 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.114054 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.114154 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.114183 5106 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.114209 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.114234 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.114258 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.114328 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.114354 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.114588 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.114623 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.114966 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.115030 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.116539 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.115235 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.115492 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.115835 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.116553 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.116627 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.115860 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.115891 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:10:25.615868072 +0000 UTC m=+80.049602126 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.114678 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.116753 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.116793 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Mar 
20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.116922 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-cni-bin\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.116961 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-ovnkube-script-lib\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.116996 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rszfl\" (UniqueName: \"kubernetes.io/projected/99795294-4844-44e8-b55b-998323bd4f6e-kube-api-access-rszfl\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117034 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65fc70aa-db07-47cd-b307-36ca79bc3366-system-cni-dir\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117068 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-var-lib-cni-bin\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " 
pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117099 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9a6c6201-eadf-497e-921b-e5fcec3ccddb-mcd-auth-proxy-config\") pod \"machine-config-daemon-769dn\" (UID: \"9a6c6201-eadf-497e-921b-e5fcec3ccddb\") " pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117130 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lh8x\" (UniqueName: \"kubernetes.io/projected/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-kube-api-access-8lh8x\") pod \"ovnkube-control-plane-57b78d8988-trcsc\" (UID: \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117186 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65fc70aa-db07-47cd-b307-36ca79bc3366-cnibin\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117235 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-ovnkube-config\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117268 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65fc70aa-db07-47cd-b307-36ca79bc3366-os-release\") pod 
\"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117299 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-system-cni-dir\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117331 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-multus-daemon-config\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117362 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-ovn\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117400 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65fc70aa-db07-47cd-b307-36ca79bc3366-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117431 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-multus-socket-dir-parent\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117453 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24fjj\" (UniqueName: \"kubernetes.io/projected/58f9d176-e017-4ab6-b0ad-7d97c5746baf-kube-api-access-24fjj\") pod \"node-resolver-zqbrj\" (UID: \"58f9d176-e017-4ab6-b0ad-7d97c5746baf\") " pod="openshift-dns/node-resolver-zqbrj" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117474 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-cni-netd\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117496 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/99795294-4844-44e8-b55b-998323bd4f6e-ovn-node-metrics-cert\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117536 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-run-k8s-cni-cncf-io\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117558 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-run-multus-certs\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117626 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-slash\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117660 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jjs84\" (UniqueName: \"kubernetes.io/projected/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-kube-api-access-jjs84\") pod \"network-metrics-daemon-5qf4l\" (UID: \"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\") " pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117688 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/65fc70aa-db07-47cd-b307-36ca79bc3366-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117722 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117754 5106 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfrsq\" (UniqueName: \"kubernetes.io/projected/9a6c6201-eadf-497e-921b-e5fcec3ccddb-kube-api-access-qfrsq\") pod \"machine-config-daemon-769dn\" (UID: \"9a6c6201-eadf-497e-921b-e5fcec3ccddb\") " pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117773 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fc70aa-db07-47cd-b307-36ca79bc3366\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wwnpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117838 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0e025495-7d3d-4ff6-a3af-a6d3c459cc74-serviceca\") pod \"node-ca-kq4bp\" (UID: \"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\") " pod="openshift-image-registry/node-ca-kq4bp" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117880 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-multus-cni-dir\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117909 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/58f9d176-e017-4ab6-b0ad-7d97c5746baf-tmp-dir\") pod \"node-resolver-zqbrj\" (UID: \"58f9d176-e017-4ab6-b0ad-7d97c5746baf\") " pod="openshift-dns/node-resolver-zqbrj" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117942 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-systemd-units\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117992 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-cnibin\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.118025 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-run-netns\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.118057 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-openvswitch\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.118099 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mvlx\" (UniqueName: \"kubernetes.io/projected/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-kube-api-access-7mvlx\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.118129 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwttp\" (UniqueName: 
\"kubernetes.io/projected/0e025495-7d3d-4ff6-a3af-a6d3c459cc74-kube-api-access-zwttp\") pod \"node-ca-kq4bp\" (UID: \"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\") " pod="openshift-image-registry/node-ca-kq4bp" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.118215 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-etc-kubernetes\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.118250 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-node-log\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.118261 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/65fc70aa-db07-47cd-b307-36ca79bc3366-cnibin\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.118317 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/65fc70aa-db07-47cd-b307-36ca79bc3366-os-release\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.118269 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.116926 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.116955 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.116456 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117023 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117029 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117113 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117149 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117268 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117455 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117541 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117855 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117857 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117805 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.117902 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.115906 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.118519 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65fc70aa-db07-47cd-b307-36ca79bc3366-cni-binary-copy\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.118566 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/65fc70aa-db07-47cd-b307-36ca79bc3366-system-cni-dir\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.118630 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.118813 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65fc70aa-db07-47cd-b307-36ca79bc3366-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.118904 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: 
"a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.118889 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-var-lib-kubelet\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.119027 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/65fc70aa-db07-47cd-b307-36ca79bc3366-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.119075 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-hostroot\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.119764 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-var-lib-openvswitch\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.119838 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-log-socket\") pod 
\"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.119510 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/65fc70aa-db07-47cd-b307-36ca79bc3366-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.119443 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/65fc70aa-db07-47cd-b307-36ca79bc3366-cni-binary-copy\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.119688 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.119886 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs\") pod \"network-metrics-daemon-5qf4l\" (UID: \"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\") " pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.119843 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.119865 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.119951 5106 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.119977 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-multus-conf-dir\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.120022 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs podName:64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:25.620003396 +0000 UTC m=+80.053737550 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs") pod "network-metrics-daemon-5qf4l" (UID: "64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.120039 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9a6c6201-eadf-497e-921b-e5fcec3ccddb-rootfs\") pod \"machine-config-daemon-769dn\" (UID: \"9a6c6201-eadf-497e-921b-e5fcec3ccddb\") " pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.119537 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/65fc70aa-db07-47cd-b307-36ca79bc3366-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.120564 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.120619 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.120651 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.120684 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.120103 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-os-release\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.120836 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-run-netns\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.120858 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" 
(UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.120868 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-trcsc\" (UID: \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.120895 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-trcsc\" (UID: \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121018 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-systemd\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121036 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121049 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-env-overrides\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r"
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121084 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-cni-binary-copy\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh"
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121133 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121152 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sxgmg\" (UniqueName: \"kubernetes.io/projected/65fc70aa-db07-47cd-b307-36ca79bc3366-kube-api-access-sxgmg\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd"
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121201 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-var-lib-cni-multus\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh"
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121366 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121471 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121456 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e025495-7d3d-4ff6-a3af-a6d3c459cc74-host\") pod \"node-ca-kq4bp\" (UID: \"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\") " pod="openshift-image-registry/node-ca-kq4bp"
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121534 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/58f9d176-e017-4ab6-b0ad-7d97c5746baf-hosts-file\") pod \"node-resolver-zqbrj\" (UID: \"58f9d176-e017-4ab6-b0ad-7d97c5746baf\") " pod="openshift-dns/node-resolver-zqbrj"
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121686 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9a6c6201-eadf-497e-921b-e5fcec3ccddb-proxy-tls\") pod \"machine-config-daemon-769dn\" (UID: \"9a6c6201-eadf-497e-921b-e5fcec3ccddb\") " pod="openshift-machine-config-operator/machine-config-daemon-769dn"
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121714 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-trcsc\" (UID: \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc"
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121751 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121773 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-kubelet\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r"
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121817 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-etc-openvswitch\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r"
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.122632 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.123183 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.123325 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.124728 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.121978 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-run-ovn-kubernetes\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r"
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125287 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125411 5106 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125431 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125449 5106 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125460 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125472 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125485 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125497 5106 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125508 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125519 5106 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125532 5106 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125544 5106 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125555 5106 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125566 5106 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125592 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125603 5106 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125614 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125625 5106 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125635 5106 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125646 5106 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125657 5106 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125669 5106 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125681 5106 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125694 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125706 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125718 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125730 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125742 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125755 5106 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125773 5106 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125785 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125797 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125808 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125819 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125831 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125843 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125855 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125866 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125879 5106 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125893 5106 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125905 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125916 5106 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125928 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125939 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125952 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125963 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125973 5106 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125985 5106 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.125996 5106 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126007 5106 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126018 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126031 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126042 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126053 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126066 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126077 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126087 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126098 5106 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126110 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126121 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126133 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126238 5106 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126293 5106 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126314 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126325 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126336 5106 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126348 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126360 5106 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126405 5106 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126454 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126468 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126485 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126502 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126560 5106 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126641 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126659 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126695 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126700 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126736 5106 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126750 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126763 5106 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126777 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126790 5106 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126802 5106 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126815 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126826 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126839 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126851 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126863 5106 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126656 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kq4bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwttp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kq4bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126874 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126989 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127003 5106 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127015 5106 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127027 5106 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127038 5106 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126795 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.126815 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127050 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127095 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127108 5106 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127124 5106 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127132 5106 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127139 5106 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127147 5106 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:10:25 crc kubenswrapper[5106]: I0320
00:10:25.127155 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127162 5106 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127171 5106 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127179 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127187 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127195 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127203 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127211 5106 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127219 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127226 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127235 5106 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127243 5106 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127255 5106 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127262 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127271 5106 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127286 5106 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127295 5106 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127303 5106 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127311 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127321 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127332 5106 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127342 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on 
node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127354 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127364 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127376 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127386 5106 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127397 5106 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127407 5106 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127338 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127416 5106 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127470 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127484 5106 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127495 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127508 5106 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127507 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127526 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127518 5106 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127552 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127560 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127568 5106 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127591 5106 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127599 5106 reconciler_common.go:299] "Volume 
detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127607 5106 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127616 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127624 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127631 5106 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127640 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127648 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127656 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127663 5106 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127672 5106 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127683 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127691 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127700 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127708 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127717 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" 
DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127725 5106 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127733 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127741 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127749 5106 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127757 5106 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127765 5106 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127773 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127781 5106 reconciler_common.go:299] 
"Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127789 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127796 5106 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127805 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127813 5106 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127821 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127829 5106 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127837 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: 
\"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127846 5106 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127854 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127862 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127870 5106 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127878 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127886 5106 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127894 5106 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127901 5106 
reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127908 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127916 5106 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127925 5106 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127932 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127942 5106 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127951 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127959 5106 reconciler_common.go:299] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127967 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127975 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127983 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127990 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.127999 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128007 5106 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128015 5106 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128023 5106 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128031 5106 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128038 5106 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128046 5106 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128054 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128062 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128071 5106 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128080 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128090 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128097 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128104 5106 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128112 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128120 5106 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128127 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128135 5106 
reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128144 5106 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128152 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.128217 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.129669 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.132782 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.133035 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.133545 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.134010 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.136105 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c5fc6b1-3d0e-4319-b15c-341af0218c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23b036e80d136c45a9ff13a63408428b22b422d4f4dcf9c1e8973f1ff9c5bb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":
0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea7f9d9918db9d162a898a623186f1b6c094b8b6a48de2e07d8a3b950e1989a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://889ed27676105199f97185f7271b652aed7692b5b50a68e9eb5403ffffe024dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests
\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.139569 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjs84\" (UniqueName: \"kubernetes.io/projected/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-kube-api-access-jjs84\") pod \"network-metrics-daemon-5qf4l\" (UID: \"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\") " pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.140562 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxgmg\" (UniqueName: \"kubernetes.io/projected/65fc70aa-db07-47cd-b307-36ca79bc3366-kube-api-access-sxgmg\") pod \"multus-additional-cni-plugins-wwnpd\" (UID: \"65fc70aa-db07-47cd-b307-36ca79bc3366\") " pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.146826 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.147206 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xtksh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xtksh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.147464 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.158140 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.158186 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.158199 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.158239 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.158255 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:25Z","lastTransitionTime":"2026-03-20T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.160425 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.160724 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99795294-4844-44e8-b55b-998323bd4f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvw6r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.161968 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.168612 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.168971 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a6c6201-eadf-497e-921b-e5fcec3ccddb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-769dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.169688 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.171904 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.173765 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.176940 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.182039 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.183295 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.184495 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.185083 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfb8ff3-0a09-426e-a55b-fdb55d63f156\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8eeb239f1f4dc20d7a4eb3082743b15402fcfa64c2d765f055b60febb4bb8159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementa
lGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e57cd65d5edb028b95e533a8f24e531dd67b433e774ff14db6159c675c881932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://db54af3009de86e4a8fcc0fdc5233912d638c6cb4df949f46d9d0836472096c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f027531e351171f27f0d234f1340e8fc330fd1beef5df845b3d7c67d8f8cdd5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8428ebe4abc8dd7a8d292a9989c16
c6616ff7e40a01779178094a437cb8af76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28fd
74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36
a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.186185 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.187786 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.189893 5106 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.191695 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.192623 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.194409 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.195146 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.196164 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" 
path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.197092 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.197736 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.199634 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.201258 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.201646 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qf4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.203442 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.204752 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.207527 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-zqbrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9d176-e017-4ab6-b0ad-7d97c5746baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24fjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zqbrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.208915 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.210001 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.211666 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.212552 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.213794 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.215110 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.215806 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.218430 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.218938 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.220939 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.221767 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.225114 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.227512 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.228839 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-etc-openvswitch\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.228866 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-run-ovn-kubernetes\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.228884 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-cni-bin\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.228925 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-ovnkube-script-lib\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.228941 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rszfl\" (UniqueName: \"kubernetes.io/projected/99795294-4844-44e8-b55b-998323bd4f6e-kube-api-access-rszfl\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.228937 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-etc-openvswitch\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.228937 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-run-ovn-kubernetes\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.228977 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-cni-bin\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.229672 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-ovnkube-script-lib\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.229750 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-var-lib-cni-bin\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.229800 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9a6c6201-eadf-497e-921b-e5fcec3ccddb-mcd-auth-proxy-config\") pod \"machine-config-daemon-769dn\" (UID: \"9a6c6201-eadf-497e-921b-e5fcec3ccddb\") " pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.229828 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8lh8x\" (UniqueName: \"kubernetes.io/projected/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-kube-api-access-8lh8x\") pod \"ovnkube-control-plane-57b78d8988-trcsc\" (UID: \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.229860 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-ovnkube-config\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.229880 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-var-lib-cni-bin\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.229887 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-system-cni-dir\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.229923 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-multus-daemon-config\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.229940 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-system-cni-dir\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.229942 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-ovn\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.229972 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-multus-socket-dir-parent\") pod \"multus-xtksh\" (UID: 
\"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.229975 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-ovn\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230001 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-24fjj\" (UniqueName: \"kubernetes.io/projected/58f9d176-e017-4ab6-b0ad-7d97c5746baf-kube-api-access-24fjj\") pod \"node-resolver-zqbrj\" (UID: \"58f9d176-e017-4ab6-b0ad-7d97c5746baf\") " pod="openshift-dns/node-resolver-zqbrj" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230025 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-cni-netd\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230045 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/99795294-4844-44e8-b55b-998323bd4f6e-ovn-node-metrics-cert\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230078 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-run-k8s-cni-cncf-io\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 
00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230097 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-run-multus-certs\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230113 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-multus-socket-dir-parent\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230118 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-slash\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230188 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qfrsq\" (UniqueName: \"kubernetes.io/projected/9a6c6201-eadf-497e-921b-e5fcec3ccddb-kube-api-access-qfrsq\") pod \"machine-config-daemon-769dn\" (UID: \"9a6c6201-eadf-497e-921b-e5fcec3ccddb\") " pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230211 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0e025495-7d3d-4ff6-a3af-a6d3c459cc74-serviceca\") pod \"node-ca-kq4bp\" (UID: \"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\") " pod="openshift-image-registry/node-ca-kq4bp" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230233 5106 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-multus-cni-dir\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230362 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/58f9d176-e017-4ab6-b0ad-7d97c5746baf-tmp-dir\") pod \"node-resolver-zqbrj\" (UID: \"58f9d176-e017-4ab6-b0ad-7d97c5746baf\") " pod="openshift-dns/node-resolver-zqbrj" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230391 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-systemd-units\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230420 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-cnibin\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230437 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-run-netns\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230459 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-openvswitch\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230484 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7mvlx\" (UniqueName: \"kubernetes.io/projected/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-kube-api-access-7mvlx\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230501 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zwttp\" (UniqueName: \"kubernetes.io/projected/0e025495-7d3d-4ff6-a3af-a6d3c459cc74-kube-api-access-zwttp\") pod \"node-ca-kq4bp\" (UID: \"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\") " pod="openshift-image-registry/node-ca-kq4bp" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230519 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-etc-kubernetes\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230535 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-node-log\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230555 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230597 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-var-lib-kubelet\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230620 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-hostroot\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230641 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-var-lib-openvswitch\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230658 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-log-socket\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230685 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-multus-conf-dir\") pod 
\"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230705 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9a6c6201-eadf-497e-921b-e5fcec3ccddb-rootfs\") pod \"machine-config-daemon-769dn\" (UID: \"9a6c6201-eadf-497e-921b-e5fcec3ccddb\") " pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230735 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-os-release\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230752 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-run-netns\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230773 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-trcsc\" (UID: \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230794 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-trcsc\" (UID: 
\"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230812 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-systemd\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230814 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-ovnkube-config\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230830 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-env-overrides\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230853 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-openvswitch\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230851 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-cni-binary-copy\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 
20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230888 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-var-lib-cni-multus\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230903 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e025495-7d3d-4ff6-a3af-a6d3c459cc74-host\") pod \"node-ca-kq4bp\" (UID: \"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\") " pod="openshift-image-registry/node-ca-kq4bp" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230934 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/58f9d176-e017-4ab6-b0ad-7d97c5746baf-hosts-file\") pod \"node-resolver-zqbrj\" (UID: \"58f9d176-e017-4ab6-b0ad-7d97c5746baf\") " pod="openshift-dns/node-resolver-zqbrj" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230954 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9a6c6201-eadf-497e-921b-e5fcec3ccddb-proxy-tls\") pod \"machine-config-daemon-769dn\" (UID: \"9a6c6201-eadf-497e-921b-e5fcec3ccddb\") " pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230969 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-trcsc\" (UID: \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.230984 5106 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-kubelet\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231036 5106 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231047 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231057 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231065 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231074 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231085 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on 
node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231094 5106 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231103 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231112 5106 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231124 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231136 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231148 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231160 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: 
I0320 00:10:25.231173 5106 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231188 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.233086 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231234 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-log-socket\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231831 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-multus-cni-dir\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231911 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-run-k8s-cni-cncf-io\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232164 5106 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/58f9d176-e017-4ab6-b0ad-7d97c5746baf-tmp-dir\") pod \"node-resolver-zqbrj\" (UID: \"58f9d176-e017-4ab6-b0ad-7d97c5746baf\") " pod="openshift-dns/node-resolver-zqbrj" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232212 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-systemd-units\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232246 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-cnibin\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232271 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232294 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-cni-binary-copy\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232302 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-etc-kubernetes\") pod \"multus-xtksh\" 
(UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232337 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-hostroot\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232350 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/58f9d176-e017-4ab6-b0ad-7d97c5746baf-hosts-file\") pod \"node-resolver-zqbrj\" (UID: \"58f9d176-e017-4ab6-b0ad-7d97c5746baf\") " pod="openshift-dns/node-resolver-zqbrj" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232401 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-node-log\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232426 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-multus-conf-dir\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232439 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-var-lib-kubelet\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232448 5106 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9a6c6201-eadf-497e-921b-e5fcec3ccddb-rootfs\") pod \"machine-config-daemon-769dn\" (UID: \"9a6c6201-eadf-497e-921b-e5fcec3ccddb\") " pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232466 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-slash\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232486 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-systemd\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232488 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-var-lib-openvswitch\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232491 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-run-multus-certs\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232541 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-cni-netd\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232648 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-var-lib-cni-multus\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232677 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e025495-7d3d-4ff6-a3af-a6d3c459cc74-host\") pod \"node-ca-kq4bp\" (UID: \"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\") " pod="openshift-image-registry/node-ca-kq4bp" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232840 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-os-release\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232865 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-host-run-netns\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232893 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-kubelet\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.232910 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-multus-daemon-config\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.231667 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-run-netns\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.233149 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.233535 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-trcsc\" (UID: \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.233636 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-trcsc\" (UID: \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.233693 5106 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0e025495-7d3d-4ff6-a3af-a6d3c459cc74-serviceca\") pod \"node-ca-kq4bp\" (UID: \"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\") " pod="openshift-image-registry/node-ca-kq4bp" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.233821 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-env-overrides\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.236278 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9a6c6201-eadf-497e-921b-e5fcec3ccddb-mcd-auth-proxy-config\") pod \"machine-config-daemon-769dn\" (UID: \"9a6c6201-eadf-497e-921b-e5fcec3ccddb\") " pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.236769 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.238050 5106 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.238240 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.238841 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/99795294-4844-44e8-b55b-998323bd4f6e-ovn-node-metrics-cert\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.239202 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9a6c6201-eadf-497e-921b-e5fcec3ccddb-proxy-tls\") pod \"machine-config-daemon-769dn\" (UID: \"9a6c6201-eadf-497e-921b-e5fcec3ccddb\") " pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.241346 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-trcsc\" (UID: \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.244941 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.245733 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rszfl\" (UniqueName: \"kubernetes.io/projected/99795294-4844-44e8-b55b-998323bd4f6e-kube-api-access-rszfl\") pod \"ovnkube-node-qvw6r\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.246241 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lh8x\" (UniqueName: \"kubernetes.io/projected/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-kube-api-access-8lh8x\") pod \"ovnkube-control-plane-57b78d8988-trcsc\" 
(UID: \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.246422 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwttp\" (UniqueName: \"kubernetes.io/projected/0e025495-7d3d-4ff6-a3af-a6d3c459cc74-kube-api-access-zwttp\") pod \"node-ca-kq4bp\" (UID: \"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\") " pod="openshift-image-registry/node-ca-kq4bp" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.247454 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.246953 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mvlx\" (UniqueName: \"kubernetes.io/projected/9da3e0a0-f6ab-4f57-925e-c59772b3d6d9-kube-api-access-7mvlx\") pod \"multus-xtksh\" (UID: \"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\") " pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.248524 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.253345 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-24fjj\" (UniqueName: \"kubernetes.io/projected/58f9d176-e017-4ab6-b0ad-7d97c5746baf-kube-api-access-24fjj\") pod \"node-resolver-zqbrj\" (UID: \"58f9d176-e017-4ab6-b0ad-7d97c5746baf\") " pod="openshift-dns/node-resolver-zqbrj" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.253890 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfrsq\" (UniqueName: \"kubernetes.io/projected/9a6c6201-eadf-497e-921b-e5fcec3ccddb-kube-api-access-qfrsq\") pod \"machine-config-daemon-769dn\" (UID: \"9a6c6201-eadf-497e-921b-e5fcec3ccddb\") " pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.254877 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.256140 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.257181 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.259890 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.259932 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 
00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.259956 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.259968 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.259977 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:25Z","lastTransitionTime":"2026-03-20T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.261235 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.262001 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.262773 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.263681 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.264450 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Mar 20 00:10:25 crc kubenswrapper[5106]: set -o allexport Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: source /etc/kubernetes/apiserver-url.env Mar 20 00:10:25 crc kubenswrapper[5106]: else Mar 20 00:10:25 crc kubenswrapper[5106]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 20 00:10:25 crc kubenswrapper[5106]: exit 1 Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 20 00:10:25 crc kubenswrapper[5106]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.264963 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.267120 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" 
podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.268289 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.270736 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.271472 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.273659 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: W0320 00:10:25.274891 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-63f4536c10edd6045ddccc2305a070a7de2b91c9663c3134ef1f50c7dfb4e6bc WatchSource:0}: Error finding container 63f4536c10edd6045ddccc2305a070a7de2b91c9663c3134ef1f50c7dfb4e6bc: Status 404 returned error can't find the container with id 63f4536c10edd6045ddccc2305a070a7de2b91c9663c3134ef1f50c7dfb4e6bc Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.276216 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.276980 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.277289 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ -f "/env/_master" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: set -o allexport Mar 20 00:10:25 crc kubenswrapper[5106]: source "/env/_master" Mar 20 00:10:25 crc kubenswrapper[5106]: set +o allexport Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 20 00:10:25 crc kubenswrapper[5106]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 20 00:10:25 crc kubenswrapper[5106]: ho_enable="--enable-hybrid-overlay" Mar 20 00:10:25 crc kubenswrapper[5106]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 20 00:10:25 crc kubenswrapper[5106]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 20 00:10:25 crc kubenswrapper[5106]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 20 00:10:25 crc kubenswrapper[5106]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 00:10:25 crc kubenswrapper[5106]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 20 00:10:25 crc kubenswrapper[5106]: --webhook-host=127.0.0.1 \ Mar 20 00:10:25 crc kubenswrapper[5106]: --webhook-port=9743 \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${ho_enable} \ Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-interconnect \ Mar 20 00:10:25 crc 
kubenswrapper[5106]: --disable-approver \ Mar 20 00:10:25 crc kubenswrapper[5106]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 20 00:10:25 crc kubenswrapper[5106]: --wait-for-kubernetes-api=200s \ Mar 20 00:10:25 crc kubenswrapper[5106]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 20 00:10:25 crc kubenswrapper[5106]: --loglevel="${LOGLEVEL}" Mar 20 00:10:25 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions
:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.278224 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.279765 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ -f "/env/_master" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: set -o allexport Mar 20 00:10:25 crc kubenswrapper[5106]: source "/env/_master" Mar 20 00:10:25 crc kubenswrapper[5106]: set +o allexport Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 20 00:10:25 crc kubenswrapper[5106]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 00:10:25 crc kubenswrapper[5106]: --disable-webhook \ Mar 20 00:10:25 crc kubenswrapper[5106]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 20 00:10:25 crc kubenswrapper[5106]: 
--loglevel="${LOGLEVEL}" Mar 20 00:10:25 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.281019 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not 
yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.281451 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.282256 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.283006 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.283852 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.285120 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Mar 20 00:10:25 crc kubenswrapper[5106]: W0320 00:10:25.297244 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-88934f4ae493152b0030aec25bc7c91272ac068c3607b0f4240a47a34100ad0a WatchSource:0}: Error finding container 88934f4ae493152b0030aec25bc7c91272ac068c3607b0f4240a47a34100ad0a: Status 404 returned error can't find the container with id 
88934f4ae493152b0030aec25bc7c91272ac068c3607b0f4240a47a34100ad0a Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.298479 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.301100 5106 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.302323 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Mar 20 00:10:25 crc kubenswrapper[5106]: W0320 00:10:25.307560 5106 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65fc70aa_db07_47cd_b307_36ca79bc3366.slice/crio-698c9706c07cd3860dc8b4cf704f6e9331e2754f35b68fdc20f6e6d915464801 WatchSource:0}: Error finding container 698c9706c07cd3860dc8b4cf704f6e9331e2754f35b68fdc20f6e6d915464801: Status 404 returned error can't find the container with id 698c9706c07cd3860dc8b4cf704f6e9331e2754f35b68fdc20f6e6d915464801 Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.310014 5106 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxgmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[
]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-wwnpd_openshift-multus(65fc70aa-db07-47cd-b307-36ca79bc3366): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.311322 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" podUID="65fc70aa-db07-47cd-b307-36ca79bc3366" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.344941 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zqbrj" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.355162 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-xtksh" Mar 20 00:10:25 crc kubenswrapper[5106]: W0320 00:10:25.360642 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58f9d176_e017_4ab6_b0ad_7d97c5746baf.slice/crio-23939e62abc478622e0ad93e77e087fc4b1dce370212aa280caddea7bd2bb6cc WatchSource:0}: Error finding container 23939e62abc478622e0ad93e77e087fc4b1dce370212aa280caddea7bd2bb6cc: Status 404 returned error can't find the container with id 23939e62abc478622e0ad93e77e087fc4b1dce370212aa280caddea7bd2bb6cc Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.362111 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.362153 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.362163 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.362179 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.362189 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:25Z","lastTransitionTime":"2026-03-20T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.362461 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.364712 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Mar 20 00:10:25 crc kubenswrapper[5106]: set -uo pipefail Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 20 00:10:25 crc kubenswrapper[5106]: HOSTS_FILE="/etc/hosts" Mar 20 00:10:25 crc kubenswrapper[5106]: TEMP_FILE="/tmp/hosts.tmp" Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: # Make a temporary file with the old hosts file's attributes. Mar 20 00:10:25 crc kubenswrapper[5106]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 20 00:10:25 crc kubenswrapper[5106]: echo "Failed to preserve hosts file. Exiting." Mar 20 00:10:25 crc kubenswrapper[5106]: exit 1 Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: while true; do Mar 20 00:10:25 crc kubenswrapper[5106]: declare -A svc_ips Mar 20 00:10:25 crc kubenswrapper[5106]: for svc in "${services[@]}"; do Mar 20 00:10:25 crc kubenswrapper[5106]: # Fetch service IP from cluster dns if present. We make several tries Mar 20 00:10:25 crc kubenswrapper[5106]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Mar 20 00:10:25 crc kubenswrapper[5106]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 20 00:10:25 crc kubenswrapper[5106]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 20 00:10:25 crc kubenswrapper[5106]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 00:10:25 crc kubenswrapper[5106]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 00:10:25 crc kubenswrapper[5106]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 00:10:25 crc kubenswrapper[5106]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 20 00:10:25 crc kubenswrapper[5106]: for i in ${!cmds[*]} Mar 20 00:10:25 crc kubenswrapper[5106]: do Mar 20 00:10:25 crc kubenswrapper[5106]: ips=($(eval "${cmds[i]}")) Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: svc_ips["${svc}"]="${ips[@]}" Mar 20 00:10:25 crc kubenswrapper[5106]: break Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: # Update /etc/hosts only if we get valid service IPs Mar 20 00:10:25 crc kubenswrapper[5106]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 20 00:10:25 crc kubenswrapper[5106]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 20 00:10:25 crc kubenswrapper[5106]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 20 00:10:25 crc kubenswrapper[5106]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 20 00:10:25 crc kubenswrapper[5106]: sleep 60 & wait Mar 20 00:10:25 crc kubenswrapper[5106]: continue Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: # Append resolver entries for services Mar 20 00:10:25 crc kubenswrapper[5106]: rc=0 Mar 20 00:10:25 crc kubenswrapper[5106]: for svc in "${!svc_ips[@]}"; do Mar 20 00:10:25 crc kubenswrapper[5106]: for ip in ${svc_ips[${svc}]}; do Mar 20 00:10:25 crc kubenswrapper[5106]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ $rc -ne 0 ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: sleep 60 & wait Mar 20 00:10:25 crc kubenswrapper[5106]: continue Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 20 00:10:25 crc kubenswrapper[5106]: # Replace /etc/hosts with our modified version if needed Mar 20 00:10:25 crc kubenswrapper[5106]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 20 00:10:25 crc kubenswrapper[5106]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: sleep 60 & wait Mar 20 00:10:25 crc kubenswrapper[5106]: unset svc_ips Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24fjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-zqbrj_openshift-dns(58f9d176-e017-4ab6-b0ad-7d97c5746baf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.365900 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not 
yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-zqbrj" podUID="58f9d176-e017-4ab6-b0ad-7d97c5746baf" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.372226 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:10:25 crc kubenswrapper[5106]: W0320 00:10:25.372640 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9da3e0a0_f6ab_4f57_925e_c59772b3d6d9.slice/crio-551c0958dfa9274aff75463f79a6f85482719583d76dc893045c04980ea6d99f WatchSource:0}: Error finding container 551c0958dfa9274aff75463f79a6f85482719583d76dc893045c04980ea6d99f: Status 404 returned error can't find the container with id 551c0958dfa9274aff75463f79a6f85482719583d76dc893045c04980ea6d99f Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.376566 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 20 00:10:25 crc kubenswrapper[5106]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 20 00:10:25 crc kubenswrapper[5106]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mvlx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-xtksh_openshift-multus(9da3e0a0-f6ab-4f57-925e-c59772b3d6d9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.378119 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-xtksh" podUID="9da3e0a0-f6ab-4f57-925e-c59772b3d6d9" Mar 20 00:10:25 crc kubenswrapper[5106]: W0320 00:10:25.378611 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99795294_4844_44e8_b55b_998323bd4f6e.slice/crio-d8549a9c58ad3977289558b5d259040ba8382a65f326eae10975f7b4a2222951 WatchSource:0}: Error finding container d8549a9c58ad3977289558b5d259040ba8382a65f326eae10975f7b4a2222951: Status 404 returned error can't find the container with id d8549a9c58ad3977289558b5d259040ba8382a65f326eae10975f7b4a2222951 Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.379859 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.380338 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 20 00:10:25 crc kubenswrapper[5106]: apiVersion: v1 Mar 20 00:10:25 crc kubenswrapper[5106]: clusters: Mar 20 00:10:25 crc kubenswrapper[5106]: - cluster: Mar 20 00:10:25 crc kubenswrapper[5106]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 20 00:10:25 crc kubenswrapper[5106]: server: https://api-int.crc.testing:6443 Mar 20 00:10:25 crc kubenswrapper[5106]: name: default-cluster Mar 20 00:10:25 crc kubenswrapper[5106]: contexts: Mar 20 00:10:25 crc kubenswrapper[5106]: - context: Mar 20 00:10:25 crc kubenswrapper[5106]: cluster: default-cluster Mar 20 00:10:25 crc kubenswrapper[5106]: namespace: default Mar 20 00:10:25 crc kubenswrapper[5106]: user: default-auth Mar 20 00:10:25 crc kubenswrapper[5106]: name: default-context Mar 20 00:10:25 crc kubenswrapper[5106]: current-context: default-context Mar 20 00:10:25 crc kubenswrapper[5106]: kind: Config Mar 20 00:10:25 crc kubenswrapper[5106]: preferences: {} Mar 20 00:10:25 crc kubenswrapper[5106]: users: Mar 20 00:10:25 crc kubenswrapper[5106]: - name: default-auth Mar 20 00:10:25 crc kubenswrapper[5106]: user: Mar 20 00:10:25 crc kubenswrapper[5106]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 00:10:25 crc kubenswrapper[5106]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 00:10:25 crc kubenswrapper[5106]: EOF Mar 20 00:10:25 crc kubenswrapper[5106]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rszfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-qvw6r_openshift-ovn-kubernetes(99795294-4844-44e8-b55b-998323bd4f6e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.383006 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" podUID="99795294-4844-44e8-b55b-998323bd4f6e" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.386516 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kq4bp" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.386836 5106 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfrsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-769dn_openshift-machine-config-operator(9a6c6201-eadf-497e-921b-e5fcec3ccddb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.388836 5106 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfrsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-769dn_openshift-machine-config-operator(9a6c6201-eadf-497e-921b-e5fcec3ccddb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.390018 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-769dn" 
podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" Mar 20 00:10:25 crc kubenswrapper[5106]: W0320 00:10:25.395303 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60b4e0cb_0c7a_4a61_8c4c_2075e7bf2ebe.slice/crio-4ee47e85fe34a9bc0d0f079f2b85da175c22f3c46bd033232e940b716040e386 WatchSource:0}: Error finding container 4ee47e85fe34a9bc0d0f079f2b85da175c22f3c46bd033232e940b716040e386: Status 404 returned error can't find the container with id 4ee47e85fe34a9bc0d0f079f2b85da175c22f3c46bd033232e940b716040e386 Mar 20 00:10:25 crc kubenswrapper[5106]: W0320 00:10:25.397515 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e025495_7d3d_4ff6_a3af_a6d3c459cc74.slice/crio-6ba9c410dea90c7a1b57e049cf051f042d3d9d495c2e171e780f803a21f214ff WatchSource:0}: Error finding container 6ba9c410dea90c7a1b57e049cf051f042d3d9d495c2e171e780f803a21f214ff: Status 404 returned error can't find the container with id 6ba9c410dea90c7a1b57e049cf051f042d3d9d495c2e171e780f803a21f214ff Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.398739 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Mar 20 00:10:25 crc kubenswrapper[5106]: set -euo pipefail Mar 20 00:10:25 crc kubenswrapper[5106]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 20 00:10:25 crc kubenswrapper[5106]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 20 00:10:25 crc kubenswrapper[5106]: # As the secret mount is optional we must wait for the files to be present. Mar 20 00:10:25 crc kubenswrapper[5106]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Mar 20 00:10:25 crc kubenswrapper[5106]: TS=$(date +%s) Mar 20 00:10:25 crc kubenswrapper[5106]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 20 00:10:25 crc kubenswrapper[5106]: HAS_LOGGED_INFO=0 Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: log_missing_certs(){ Mar 20 00:10:25 crc kubenswrapper[5106]: CUR_TS=$(date +%s) Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 20 00:10:25 crc kubenswrapper[5106]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 20 00:10:25 crc kubenswrapper[5106]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 20 00:10:25 crc kubenswrapper[5106]: HAS_LOGGED_INFO=1 Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: } Mar 20 00:10:25 crc kubenswrapper[5106]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 20 00:10:25 crc kubenswrapper[5106]: log_missing_certs Mar 20 00:10:25 crc kubenswrapper[5106]: sleep 5 Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 20 00:10:25 crc kubenswrapper[5106]: exec /usr/bin/kube-rbac-proxy \ Mar 20 00:10:25 crc kubenswrapper[5106]: --logtostderr \ Mar 20 00:10:25 crc kubenswrapper[5106]: --secure-listen-address=:9108 \ Mar 20 00:10:25 crc kubenswrapper[5106]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 20 00:10:25 crc kubenswrapper[5106]: --upstream=http://127.0.0.1:29108/ \ Mar 20 00:10:25 crc kubenswrapper[5106]: --tls-private-key-file=${TLS_PK} \ Mar 20 00:10:25 crc kubenswrapper[5106]: --tls-cert-file=${TLS_CERT} Mar 20 00:10:25 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lh8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-trcsc_openshift-ovn-kubernetes(60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.400295 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 20 00:10:25 crc kubenswrapper[5106]: while [ true ]; Mar 20 00:10:25 crc kubenswrapper[5106]: do Mar 20 00:10:25 crc kubenswrapper[5106]: for f in $(ls /tmp/serviceca); do Mar 20 00:10:25 crc kubenswrapper[5106]: echo $f Mar 20 00:10:25 crc kubenswrapper[5106]: ca_file_path="/tmp/serviceca/${f}" Mar 20 00:10:25 crc kubenswrapper[5106]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 20 00:10:25 crc kubenswrapper[5106]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 20 00:10:25 crc kubenswrapper[5106]: if [ -e 
"${reg_dir_path}" ]; then Mar 20 00:10:25 crc kubenswrapper[5106]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 20 00:10:25 crc kubenswrapper[5106]: else Mar 20 00:10:25 crc kubenswrapper[5106]: mkdir $reg_dir_path Mar 20 00:10:25 crc kubenswrapper[5106]: cp $ca_file_path $reg_dir_path/ca.crt Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: for d in $(ls /etc/docker/certs.d); do Mar 20 00:10:25 crc kubenswrapper[5106]: echo $d Mar 20 00:10:25 crc kubenswrapper[5106]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 20 00:10:25 crc kubenswrapper[5106]: reg_conf_path="/tmp/serviceca/${dp}" Mar 20 00:10:25 crc kubenswrapper[5106]: if [ ! -e "${reg_conf_path}" ]; then Mar 20 00:10:25 crc kubenswrapper[5106]: rm -rf /etc/docker/certs.d/$d Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: sleep 60 & wait ${!} Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwttp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-kq4bp_openshift-image-registry(0e025495-7d3d-4ff6-a3af-a6d3c459cc74): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.401246 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ -f "/env/_master" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: set -o allexport Mar 20 00:10:25 crc kubenswrapper[5106]: source "/env/_master" Mar 20 00:10:25 crc kubenswrapper[5106]: set +o 
allexport Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v4_join_subnet_opt= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "" != "" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v6_join_subnet_opt= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "" != "" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v4_transit_switch_subnet_opt= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "" != "" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v6_transit_switch_subnet_opt= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "" != "" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: dns_name_resolver_enabled_flag= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "false" == "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: # This is needed so that converting clusters from GA to TP Mar 20 00:10:25 crc kubenswrapper[5106]: # will rollout control plane pods as well Mar 20 00:10:25 crc 
kubenswrapper[5106]: network_segmentation_enabled_flag= Mar 20 00:10:25 crc kubenswrapper[5106]: multi_network_enabled_flag= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "true" == "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: multi_network_enabled_flag="--enable-multi-network" Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "true" == "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "true" != "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: multi_network_enabled_flag="--enable-multi-network" Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: route_advertisements_enable_flag= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "false" == "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: route_advertisements_enable_flag="--enable-route-advertisements" Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: preconfigured_udn_addresses_enable_flag= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "false" == "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: # Enable multi-network policy if configured (control-plane always full mode) Mar 20 00:10:25 crc kubenswrapper[5106]: multi_network_policy_enabled_flag= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "false" == "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc 
kubenswrapper[5106]: # Enable admin network policy if configured (control-plane always full mode) Mar 20 00:10:25 crc kubenswrapper[5106]: admin_network_policy_enabled_flag= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "true" == "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: if [ "shared" == "shared" ]; then Mar 20 00:10:25 crc kubenswrapper[5106]: gateway_mode_flags="--gateway-mode shared" Mar 20 00:10:25 crc kubenswrapper[5106]: elif [ "shared" == "local" ]; then Mar 20 00:10:25 crc kubenswrapper[5106]: gateway_mode_flags="--gateway-mode local" Mar 20 00:10:25 crc kubenswrapper[5106]: else Mar 20 00:10:25 crc kubenswrapper[5106]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Mar 20 00:10:25 crc kubenswrapper[5106]: exit 1 Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 20 00:10:25 crc kubenswrapper[5106]: exec /usr/bin/ovnkube \ Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-interconnect \ Mar 20 00:10:25 crc kubenswrapper[5106]: --init-cluster-manager "${K8S_NODE}" \ Mar 20 00:10:25 crc kubenswrapper[5106]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 20 00:10:25 crc kubenswrapper[5106]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 20 00:10:25 crc kubenswrapper[5106]: --metrics-bind-address "127.0.0.1:29108" \ Mar 20 00:10:25 crc kubenswrapper[5106]: --metrics-enable-pprof \ Mar 20 00:10:25 crc kubenswrapper[5106]: --metrics-enable-config-duration \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${ovn_v4_join_subnet_opt} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${ovn_v6_join_subnet_opt} \ Mar 20 00:10:25 crc kubenswrapper[5106]: 
${ovn_v4_transit_switch_subnet_opt} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${dns_name_resolver_enabled_flag} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${persistent_ips_enabled_flag} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${multi_network_enabled_flag} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${network_segmentation_enabled_flag} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${gateway_mode_flags} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${route_advertisements_enable_flag} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${preconfigured_udn_addresses_enable_flag} \ Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-egress-ip=true \ Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-egress-firewall=true \ Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-egress-qos=true \ Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-egress-service=true \ Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-multicast \ Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-multi-external-gateway=true \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${multi_network_policy_enabled_flag} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${admin_network_policy_enabled_flag} Mar 20 00:10:25 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lh8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-trcsc_openshift-ovn-kubernetes(60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.401369 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-kq4bp" podUID="0e025495-7d3d-4ff6-a3af-a6d3c459cc74" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.403233 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" podUID="60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.463979 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.464030 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.464042 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.464057 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.464071 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:25Z","lastTransitionTime":"2026-03-20T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.537207 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.537251 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.537268 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.537302 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.537397 5106 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.537448 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:26.537434477 +0000 UTC m=+80.971168521 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.537481 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.537520 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.537531 5106 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.537614 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. 
No retries permitted until 2026-03-20 00:10:26.537569041 +0000 UTC m=+80.971303095 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.537651 5106 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.537675 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:26.537666363 +0000 UTC m=+80.971400417 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.538270 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.538287 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.538314 5106 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.538347 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:26.53833826 +0000 UTC m=+80.972072314 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.566282 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.566531 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.566654 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.566836 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.566876 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:25Z","lastTransitionTime":"2026-03-20T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.638410 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.638528 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs\") pod \"network-metrics-daemon-5qf4l\" (UID: \"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\") " pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.638638 5106 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.638663 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:10:26.6386247 +0000 UTC m=+81.072358794 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.638723 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs podName:64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:26.638701762 +0000 UTC m=+81.072435856 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs") pod "network-metrics-daemon-5qf4l" (UID: "64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.669001 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.669247 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.669323 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.669405 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.669484 5106 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:25Z","lastTransitionTime":"2026-03-20T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.771805 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.771850 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.771861 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.771875 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.771887 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:25Z","lastTransitionTime":"2026-03-20T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.875344 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.875720 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.875773 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.875808 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.875836 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:25Z","lastTransitionTime":"2026-03-20T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.942377 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerStarted","Data":"84076bf67fe0a52532e618930c5949b9a50c5ac356e3d9eaa2e63c3f1a755612"} Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.943227 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zqbrj" event={"ID":"58f9d176-e017-4ab6-b0ad-7d97c5746baf","Type":"ContainerStarted","Data":"23939e62abc478622e0ad93e77e087fc4b1dce370212aa280caddea7bd2bb6cc"} Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.944800 5106 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfrsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-769dn_openshift-machine-config-operator(9a6c6201-eadf-497e-921b-e5fcec3ccddb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.944988 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Mar 20 00:10:25 crc kubenswrapper[5106]: set -uo pipefail Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 20 00:10:25 crc kubenswrapper[5106]: HOSTS_FILE="/etc/hosts" Mar 20 00:10:25 crc kubenswrapper[5106]: TEMP_FILE="/tmp/hosts.tmp" Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: IFS=', ' read -r -a services <<< 
"${SERVICES}" Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: # Make a temporary file with the old hosts file's attributes. Mar 20 00:10:25 crc kubenswrapper[5106]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 20 00:10:25 crc kubenswrapper[5106]: echo "Failed to preserve hosts file. Exiting." Mar 20 00:10:25 crc kubenswrapper[5106]: exit 1 Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: while true; do Mar 20 00:10:25 crc kubenswrapper[5106]: declare -A svc_ips Mar 20 00:10:25 crc kubenswrapper[5106]: for svc in "${services[@]}"; do Mar 20 00:10:25 crc kubenswrapper[5106]: # Fetch service IP from cluster dns if present. We make several tries Mar 20 00:10:25 crc kubenswrapper[5106]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 20 00:10:25 crc kubenswrapper[5106]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 20 00:10:25 crc kubenswrapper[5106]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 20 00:10:25 crc kubenswrapper[5106]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 00:10:25 crc kubenswrapper[5106]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 00:10:25 crc kubenswrapper[5106]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 00:10:25 crc kubenswrapper[5106]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 20 00:10:25 crc kubenswrapper[5106]: for i in ${!cmds[*]} Mar 20 00:10:25 crc kubenswrapper[5106]: do Mar 20 00:10:25 crc kubenswrapper[5106]: ips=($(eval "${cmds[i]}")) Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: svc_ips["${svc}"]="${ips[@]}" Mar 20 00:10:25 crc kubenswrapper[5106]: break Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: # Update /etc/hosts only if we get valid service IPs Mar 20 00:10:25 crc kubenswrapper[5106]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 20 00:10:25 crc kubenswrapper[5106]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 20 00:10:25 crc kubenswrapper[5106]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 20 00:10:25 crc kubenswrapper[5106]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 20 00:10:25 crc kubenswrapper[5106]: sleep 60 & wait Mar 20 00:10:25 crc kubenswrapper[5106]: continue Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: # Append resolver entries for services Mar 20 00:10:25 crc kubenswrapper[5106]: rc=0 Mar 20 00:10:25 crc kubenswrapper[5106]: for svc in "${!svc_ips[@]}"; do Mar 20 00:10:25 crc kubenswrapper[5106]: for ip in ${svc_ips[${svc}]}; do Mar 20 00:10:25 crc kubenswrapper[5106]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ $rc -ne 0 ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: sleep 60 & wait Mar 20 00:10:25 crc kubenswrapper[5106]: continue Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 20 00:10:25 crc kubenswrapper[5106]: # Replace /etc/hosts with our modified version if needed Mar 20 00:10:25 crc kubenswrapper[5106]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 20 00:10:25 crc kubenswrapper[5106]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: sleep 60 & wait Mar 20 00:10:25 crc kubenswrapper[5106]: unset svc_ips Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24fjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-zqbrj_openshift-dns(58f9d176-e017-4ab6-b0ad-7d97c5746baf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.945308 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" event={"ID":"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe","Type":"ContainerStarted","Data":"4ee47e85fe34a9bc0d0f079f2b85da175c22f3c46bd033232e940b716040e386"} Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.946056 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kq4bp" 
event={"ID":"0e025495-7d3d-4ff6-a3af-a6d3c459cc74","Type":"ContainerStarted","Data":"6ba9c410dea90c7a1b57e049cf051f042d3d9d495c2e171e780f803a21f214ff"} Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.946098 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-zqbrj" podUID="58f9d176-e017-4ab6-b0ad-7d97c5746baf" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.947330 5106 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfrsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-769dn_openshift-machine-config-operator(9a6c6201-eadf-497e-921b-e5fcec3ccddb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.947799 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Mar 20 00:10:25 crc kubenswrapper[5106]: set -euo pipefail Mar 20 00:10:25 crc kubenswrapper[5106]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 20 00:10:25 crc kubenswrapper[5106]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 20 00:10:25 crc kubenswrapper[5106]: # As the secret mount is optional we must wait for the files to be present. Mar 20 00:10:25 crc kubenswrapper[5106]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Mar 20 00:10:25 crc kubenswrapper[5106]: TS=$(date +%s) Mar 20 00:10:25 crc kubenswrapper[5106]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 20 00:10:25 crc kubenswrapper[5106]: HAS_LOGGED_INFO=0 Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: log_missing_certs(){ Mar 20 00:10:25 crc kubenswrapper[5106]: CUR_TS=$(date +%s) Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Mar 20 00:10:25 crc kubenswrapper[5106]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 20 00:10:25 crc kubenswrapper[5106]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 20 00:10:25 crc kubenswrapper[5106]: HAS_LOGGED_INFO=1 Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: } Mar 20 00:10:25 crc kubenswrapper[5106]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Mar 20 00:10:25 crc kubenswrapper[5106]: log_missing_certs Mar 20 00:10:25 crc kubenswrapper[5106]: sleep 5 Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 20 00:10:25 crc kubenswrapper[5106]: exec /usr/bin/kube-rbac-proxy \ Mar 20 00:10:25 crc kubenswrapper[5106]: --logtostderr \ Mar 20 00:10:25 crc kubenswrapper[5106]: --secure-listen-address=:9108 \ Mar 20 00:10:25 crc kubenswrapper[5106]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 20 00:10:25 crc kubenswrapper[5106]: --upstream=http://127.0.0.1:29108/ \ Mar 20 00:10:25 crc kubenswrapper[5106]: --tls-private-key-file=${TLS_PK} \ Mar 20 00:10:25 crc kubenswrapper[5106]: --tls-cert-file=${TLS_CERT} Mar 20 00:10:25 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lh8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-trcsc_openshift-ovn-kubernetes(60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.947882 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 20 00:10:25 crc kubenswrapper[5106]: while [ true ]; Mar 20 00:10:25 crc kubenswrapper[5106]: do Mar 20 00:10:25 crc kubenswrapper[5106]: for f in $(ls /tmp/serviceca); do Mar 20 00:10:25 crc kubenswrapper[5106]: echo $f Mar 20 00:10:25 crc kubenswrapper[5106]: ca_file_path="/tmp/serviceca/${f}" Mar 20 00:10:25 crc kubenswrapper[5106]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 20 00:10:25 crc kubenswrapper[5106]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 20 00:10:25 crc kubenswrapper[5106]: if [ -e 
"${reg_dir_path}" ]; then Mar 20 00:10:25 crc kubenswrapper[5106]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 20 00:10:25 crc kubenswrapper[5106]: else Mar 20 00:10:25 crc kubenswrapper[5106]: mkdir $reg_dir_path Mar 20 00:10:25 crc kubenswrapper[5106]: cp $ca_file_path $reg_dir_path/ca.crt Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: for d in $(ls /etc/docker/certs.d); do Mar 20 00:10:25 crc kubenswrapper[5106]: echo $d Mar 20 00:10:25 crc kubenswrapper[5106]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 20 00:10:25 crc kubenswrapper[5106]: reg_conf_path="/tmp/serviceca/${dp}" Mar 20 00:10:25 crc kubenswrapper[5106]: if [ ! -e "${reg_conf_path}" ]; then Mar 20 00:10:25 crc kubenswrapper[5106]: rm -rf /etc/docker/certs.d/$d Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: sleep 60 & wait ${!} Mar 20 00:10:25 crc kubenswrapper[5106]: done Mar 20 00:10:25 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwttp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-kq4bp_openshift-image-registry(0e025495-7d3d-4ff6-a3af-a6d3c459cc74): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.948722 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" Mar 20 00:10:25 crc kubenswrapper[5106]: 
E0320 00:10:25.949040 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-kq4bp" podUID="0e025495-7d3d-4ff6-a3af-a6d3c459cc74" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.950518 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerStarted","Data":"d8549a9c58ad3977289558b5d259040ba8382a65f326eae10975f7b4a2222951"} Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.952123 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 20 00:10:25 crc kubenswrapper[5106]: apiVersion: v1 Mar 20 00:10:25 crc kubenswrapper[5106]: clusters: Mar 20 00:10:25 crc kubenswrapper[5106]: - cluster: Mar 20 00:10:25 crc kubenswrapper[5106]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 20 00:10:25 crc kubenswrapper[5106]: server: https://api-int.crc.testing:6443 Mar 20 00:10:25 crc kubenswrapper[5106]: name: default-cluster Mar 20 00:10:25 crc kubenswrapper[5106]: contexts: Mar 20 00:10:25 crc kubenswrapper[5106]: - context: Mar 20 00:10:25 crc kubenswrapper[5106]: cluster: default-cluster Mar 20 00:10:25 crc kubenswrapper[5106]: namespace: default Mar 20 00:10:25 crc kubenswrapper[5106]: user: default-auth Mar 20 00:10:25 crc kubenswrapper[5106]: name: default-context Mar 20 00:10:25 crc kubenswrapper[5106]: current-context: default-context Mar 20 00:10:25 crc kubenswrapper[5106]: kind: Config Mar 20 00:10:25 crc kubenswrapper[5106]: preferences: {} Mar 
20 00:10:25 crc kubenswrapper[5106]: users: Mar 20 00:10:25 crc kubenswrapper[5106]: - name: default-auth Mar 20 00:10:25 crc kubenswrapper[5106]: user: Mar 20 00:10:25 crc kubenswrapper[5106]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 00:10:25 crc kubenswrapper[5106]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 00:10:25 crc kubenswrapper[5106]: EOF Mar 20 00:10:25 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rszfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-qvw6r_openshift-ovn-kubernetes(99795294-4844-44e8-b55b-998323bd4f6e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.953018 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xtksh" event={"ID":"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9","Type":"ContainerStarted","Data":"551c0958dfa9274aff75463f79a6f85482719583d76dc893045c04980ea6d99f"} Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.953223 5106 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" podUID="99795294-4844-44e8-b55b-998323bd4f6e" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.953658 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ -f "/env/_master" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: set -o allexport Mar 20 00:10:25 crc kubenswrapper[5106]: source "/env/_master" Mar 20 00:10:25 crc kubenswrapper[5106]: set +o allexport Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v4_join_subnet_opt= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "" != "" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v6_join_subnet_opt= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "" != "" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v4_transit_switch_subnet_opt= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "" != "" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v6_transit_switch_subnet_opt= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "" 
!= "" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: dns_name_resolver_enabled_flag= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "false" == "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: # This is needed so that converting clusters from GA to TP Mar 20 00:10:25 crc kubenswrapper[5106]: # will rollout control plane pods as well Mar 20 00:10:25 crc kubenswrapper[5106]: network_segmentation_enabled_flag= Mar 20 00:10:25 crc kubenswrapper[5106]: multi_network_enabled_flag= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "true" == "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: multi_network_enabled_flag="--enable-multi-network" Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "true" == "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "true" != "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: multi_network_enabled_flag="--enable-multi-network" Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: route_advertisements_enable_flag= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "false" == "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: route_advertisements_enable_flag="--enable-route-advertisements" Mar 20 00:10:25 crc kubenswrapper[5106]: 
fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: preconfigured_udn_addresses_enable_flag= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "false" == "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: # Enable multi-network policy if configured (control-plane always full mode) Mar 20 00:10:25 crc kubenswrapper[5106]: multi_network_policy_enabled_flag= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "false" == "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: # Enable admin network policy if configured (control-plane always full mode) Mar 20 00:10:25 crc kubenswrapper[5106]: admin_network_policy_enabled_flag= Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ "true" == "true" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: if [ "shared" == "shared" ]; then Mar 20 00:10:25 crc kubenswrapper[5106]: gateway_mode_flags="--gateway-mode shared" Mar 20 00:10:25 crc kubenswrapper[5106]: elif [ "shared" == "local" ]; then Mar 20 00:10:25 crc kubenswrapper[5106]: gateway_mode_flags="--gateway-mode local" Mar 20 00:10:25 crc kubenswrapper[5106]: else Mar 20 00:10:25 crc kubenswrapper[5106]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Mar 20 00:10:25 crc kubenswrapper[5106]: exit 1 Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: Mar 20 00:10:25 crc kubenswrapper[5106]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 20 00:10:25 crc kubenswrapper[5106]: exec /usr/bin/ovnkube \ Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-interconnect \ Mar 20 00:10:25 crc kubenswrapper[5106]: --init-cluster-manager "${K8S_NODE}" \ Mar 20 00:10:25 crc kubenswrapper[5106]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 20 00:10:25 crc kubenswrapper[5106]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 20 00:10:25 crc kubenswrapper[5106]: --metrics-bind-address "127.0.0.1:29108" \ Mar 20 00:10:25 crc kubenswrapper[5106]: --metrics-enable-pprof \ Mar 20 00:10:25 crc kubenswrapper[5106]: --metrics-enable-config-duration \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${ovn_v4_join_subnet_opt} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${ovn_v6_join_subnet_opt} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${dns_name_resolver_enabled_flag} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${persistent_ips_enabled_flag} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${multi_network_enabled_flag} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${network_segmentation_enabled_flag} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${gateway_mode_flags} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${route_advertisements_enable_flag} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${preconfigured_udn_addresses_enable_flag} \ Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-egress-ip=true \ Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-egress-firewall=true \ Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-egress-qos=true \ Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-egress-service=true \ 
Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-multicast \ Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-multi-external-gateway=true \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${multi_network_policy_enabled_flag} \ Mar 20 00:10:25 crc kubenswrapper[5106]: ${admin_network_policy_enabled_flag} Mar 20 00:10:25 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lh8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-control-plane-57b78d8988-trcsc_openshift-ovn-kubernetes(60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.954092 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"37f3f50f71d8b64c313372bd02502fe773fe3ffbbf51f79ff43441a6ae26d9a3"} Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.954596 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 20 00:10:25 crc kubenswrapper[5106]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 20 00:10:25 crc kubenswrapper[5106]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mvlx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-xtksh_openshift-multus(9da3e0a0-f6ab-4f57-925e-c59772b3d6d9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError"
Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.955098 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" podUID="60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe"
Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.955675 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-xtksh" podUID="9da3e0a0-f6ab-4f57-925e-c59772b3d6d9"
Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.955867 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash
Mar 20 00:10:25 crc kubenswrapper[5106]: set -o allexport
Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then
Mar 20 00:10:25 crc kubenswrapper[5106]: source /etc/kubernetes/apiserver-url.env
Mar 20 00:10:25 crc kubenswrapper[5106]: else
Mar 20 00:10:25 crc kubenswrapper[5106]: echo "Error: /etc/kubernetes/apiserver-url.env is missing"
Mar 20 00:10:25 crc kubenswrapper[5106]: exit 1
Mar 20 00:10:25 crc kubenswrapper[5106]: fi
Mar 20 00:10:25 crc kubenswrapper[5106]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104
Mar 20 00:10:25 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,V
alue:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e
0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.956530 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" 
event={"ID":"65fc70aa-db07-47cd-b307-36ca79bc3366","Type":"ContainerStarted","Data":"698c9706c07cd3860dc8b4cf704f6e9331e2754f35b68fdc20f6e6d915464801"} Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.956990 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.958046 5106 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxgmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:ni
l,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-wwnpd_openshift-multus(65fc70aa-db07-47cd-b307-36ca79bc3366): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.958467 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"63f4536c10edd6045ddccc2305a070a7de2b91c9663c3134ef1f50c7dfb4e6bc"} Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.959143 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99795294-4844-44e8-b55b-998323bd4f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-ce
rt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-open
vswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-
openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvw6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.959347 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" podUID="65fc70aa-db07-47cd-b307-36ca79bc3366" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.959864 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ -f "/env/_master" ]]; then Mar 20 00:10:25 crc kubenswrapper[5106]: set -o allexport Mar 20 00:10:25 crc kubenswrapper[5106]: source "/env/_master" Mar 20 00:10:25 crc kubenswrapper[5106]: set +o allexport Mar 20 00:10:25 crc kubenswrapper[5106]: fi Mar 20 00:10:25 crc kubenswrapper[5106]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 20 00:10:25 crc kubenswrapper[5106]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791
Mar 20 00:10:25 crc kubenswrapper[5106]: ho_enable="--enable-hybrid-overlay"
Mar 20 00:10:25 crc kubenswrapper[5106]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook"
Mar 20 00:10:25 crc kubenswrapper[5106]: # extra-allowed-user: service account `ovn-kubernetes-control-plane`
Mar 20 00:10:25 crc kubenswrapper[5106]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager)
Mar 20 00:10:25 crc kubenswrapper[5106]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Mar 20 00:10:25 crc kubenswrapper[5106]: --webhook-cert-dir="/etc/webhook-cert" \
Mar 20 00:10:25 crc kubenswrapper[5106]: --webhook-host=127.0.0.1 \
Mar 20 00:10:25 crc kubenswrapper[5106]: --webhook-port=9743 \
Mar 20 00:10:25 crc kubenswrapper[5106]: ${ho_enable} \
Mar 20 00:10:25 crc kubenswrapper[5106]: --enable-interconnect \
Mar 20 00:10:25 crc kubenswrapper[5106]: --disable-approver \
Mar 20 00:10:25 crc kubenswrapper[5106]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \
Mar 20 00:10:25 crc kubenswrapper[5106]: --wait-for-kubernetes-api=200s \
Mar 20 00:10:25 crc kubenswrapper[5106]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \
Mar 20 00:10:25 crc kubenswrapper[5106]: --loglevel="${LOGLEVEL}"
Mar 20 00:10:25 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.960096 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"88934f4ae493152b0030aec25bc7c91272ac068c3607b0f4240a47a34100ad0a"} Mar 20 00:10:25 crc kubenswrapper[5106]: 
E0320 00:10:25.961705 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Mar 20 00:10:25 crc kubenswrapper[5106]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Mar 20 00:10:25 crc kubenswrapper[5106]: if [[ -f "/env/_master" ]]; then
Mar 20 00:10:25 crc kubenswrapper[5106]: set -o allexport
Mar 20 00:10:25 crc kubenswrapper[5106]: source "/env/_master"
Mar 20 00:10:25 crc kubenswrapper[5106]: set +o allexport
Mar 20 00:10:25 crc kubenswrapper[5106]: fi
Mar 20 00:10:25 crc kubenswrapper[5106]: 
Mar 20 00:10:25 crc kubenswrapper[5106]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver"
Mar 20 00:10:25 crc kubenswrapper[5106]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Mar 20 00:10:25 crc kubenswrapper[5106]: --disable-webhook \
Mar 20 00:10:25 crc kubenswrapper[5106]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \
Mar 20 00:10:25 crc kubenswrapper[5106]: --loglevel="${LOGLEVEL}"
Mar 20 00:10:25 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:25 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.961857 5106 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.962892 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Mar 20 00:10:25 crc kubenswrapper[5106]: E0320 00:10:25.962965 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.968846 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a6c6201-eadf-497e-921b-e5fcec3ccddb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-769dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.977910 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.977962 5106 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.977978 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.977999 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.978012 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:25Z","lastTransitionTime":"2026-03-20T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.985072 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfb8ff3-0a09-426e-a55b-fdb55d63f156\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8eeb239f1f4dc20d7a4eb3082743b15402fcfa64c2d765f055b60febb4bb8159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e57cd65d5edb028b95e533a8f24e531dd67b433e774ff14db6159c675c881932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://db54af3009de86e4a8fcc0fdc5233912d638c6cb4df949f46d9d0836472096c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f027531e351171f27f0d234f1340e8fc330fd1beef5df845b3d7c67d8f8cdd5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8428ebe4abc8dd7a8d292a9989c16c6616ff7e40a01779178094a437cb8af76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:25 crc kubenswrapper[5106]: I0320 00:10:25.995904 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.005186 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qf4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.012863 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-zqbrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9d176-e017-4ab6-b0ad-7d97c5746baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24fjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zqbrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.020909 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b816935-a84b-4aa1-850d-23de45ec2047\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a52ed260d73e5a0b44832957540e259da3d4ef397908acd15ad2b9a53eac5878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.036046 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268db469-5250-4a96-998b-1d4a465bb175\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac75ee6fde37618d4f3addeaf11a5035d2a16b4a85f84bfb1ce62033bdfaaad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPa
th\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://35863999352b5c14cf487ffef517f6c5011b42d6b9129d6f9f161ad9b265e356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi
\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a23d38bcc5416cc2ef716b58ecbcf50ca7b47b3b0e633e63c4cb2566c0285d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.045326 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.053833 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.064047 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.072941 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.080329 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.080375 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.080388 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.080406 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.080417 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:26Z","lastTransitionTime":"2026-03-20T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.081889 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.089750 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-trcsc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.101442 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f7bf04f-91df-48c2-916a-afe1e635b543\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b378391966
874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0320 00:10:17.305762 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 00:10:17.305884 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0320 00:10:17.306612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-695311479/tls.crt::/tmp/serving-cert-695311479/tls.key\\\\\\\"\\\\nI0320 00:10:18.010003 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 00:10:18.011836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 00:10:18.011852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 00:10:18.011883 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 00:10:18.011894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 00:10:18.015663 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 00:10:18.015691 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015698 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 00:10:18.015708 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 00:10:18.015711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 00:10:18.015715 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0320 00:10:18.015743 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0320 00:10:18.017230 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.113735 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fc70aa-db07-47cd-b307-36ca79bc3366\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wwnpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.127120 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kq4bp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwttp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kq4bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.140861 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c5fc6b1-3d0e-4319-b15c-341af0218c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23b036e80d136c45a9ff13a63408428b22b422d4f4dcf9c1e8973f1ff9c5bb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea7f9d9918db9d162a898a623186f1b6c094b8b6a48de2e07d8a3b950e1989a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://889ed27676105199f97185f7271b652aed7692b5b50a68e9eb5403ffffe024dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.152743 5106
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xtksh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xtksh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.160380 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.160441 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l"
Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.160828 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.161033 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.172759 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfb8ff3-0a09-426e-a55b-fdb55d63f156\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8eeb239f1f4dc20d7a4eb3082743b15402fcfa64c2d765f055b60febb4bb8159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"ui
d\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e57cd65d5edb028b95e533a8f24e531dd67b433e774ff14db6159c675c881932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://db54af3009de86e4a8fcc0fdc5233912d638c6cb4df949f46d9d0836472096c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f027531e351171f27f0d234f1340e8fc330fd1beef5df845b3d7c67d8f8cdd5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8428ebe4abc8dd7a8d292a9989c16c6616ff7e40a0177917809
4a437cb8af76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bf
cf347a01b23443a31b30b647469e940b1944c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.182247 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.182389 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.182405 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.182422 5106 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.182446 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:26Z","lastTransitionTime":"2026-03-20T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.183060 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.192099 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qf4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.200320 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-zqbrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9d176-e017-4ab6-b0ad-7d97c5746baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24fjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zqbrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.207309 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b816935-a84b-4aa1-850d-23de45ec2047\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a52ed260d73e5a0b44832957540e259da3d4ef397908acd15ad2b9a53eac5878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.216384 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268db469-5250-4a96-998b-1d4a465bb175\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac75ee6fde37618d4f3addeaf11a5035d2a16b4a85f84bfb1ce62033bdfaaad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPa
th\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://35863999352b5c14cf487ffef517f6c5011b42d6b9129d6f9f161ad9b265e356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi
\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a23d38bcc5416cc2ef716b58ecbcf50ca7b47b3b0e633e63c4cb2566c0285d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.224561 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.231936 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.240029 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.247260 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.255479 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.261941 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-trcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.285171 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.285216 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.285226 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.285243 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.285254 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:26Z","lastTransitionTime":"2026-03-20T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.291439 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f7bf04f-91df-48c2-916a-afe1e635b543\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b378391966
874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0320 00:10:17.305762 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 00:10:17.305884 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0320 00:10:17.306612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-695311479/tls.crt::/tmp/serving-cert-695311479/tls.key\\\\\\\"\\\\nI0320 00:10:18.010003 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 00:10:18.011836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 00:10:18.011852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 00:10:18.011883 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 00:10:18.011894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 00:10:18.015663 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 00:10:18.015691 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015698 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 00:10:18.015708 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 00:10:18.015711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 00:10:18.015715 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0320 00:10:18.015743 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0320 00:10:18.017230 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.332331 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fc70aa-db07-47cd-b307-36ca79bc3366\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wwnpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.369790 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kq4bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwttp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kq4bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.387531 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.387568 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.387593 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.387607 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.387618 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:26Z","lastTransitionTime":"2026-03-20T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.411925 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c5fc6b1-3d0e-4319-b15c-341af0218c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23b036e80d136c45a9ff13a63408428b22b422d4f4dcf9c1e8973f1ff9c5bb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\
":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea7f9d9918db9d162a898a623186f1b6c094b8b6a48de2e07d8a3b950e1989a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://889ed27676105199f97185f7271b652aed7692b5b50a68e9eb5403ffffe024dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-contr
oller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.452595 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xtksh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xtksh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.490640 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.490686 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.490699 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.490717 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.490733 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:26Z","lastTransitionTime":"2026-03-20T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.498140 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99795294-4844-44e8-b55b-998323bd4f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvw6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.529602 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a6c6201-eadf-497e-921b-e5fcec3ccddb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-769dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.548968 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.549118 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.549233 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.549356 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.549180 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.549511 5106 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.549547 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.549710 5106 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.549249 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.549877 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.549939 5106 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.549291 5106 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.549652 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:28.549627758 +0000 UTC m=+82.983361822 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.550125 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:28.55010597 +0000 UTC m=+82.983840044 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.550143 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:28.550134871 +0000 UTC m=+82.983868935 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.550157 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:28.550150091 +0000 UTC m=+82.983884155 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.593457 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.593509 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.593535 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.593558 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.593570 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:26Z","lastTransitionTime":"2026-03-20T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.650472 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.650699 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:10:28.650667886 +0000 UTC m=+83.084401940 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.650957 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs\") pod \"network-metrics-daemon-5qf4l\" (UID: \"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\") " pod="openshift-multus/network-metrics-daemon-5qf4l"
Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.651102 5106 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.651150 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs podName:64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:28.651141588 +0000 UTC m=+83.084875642 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs") pod "network-metrics-daemon-5qf4l" (UID: "64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.695613 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.695677 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.695695 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.695724 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.695743 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:26Z","lastTransitionTime":"2026-03-20T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.797616 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.797694 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.797716 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.797743 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.797763 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:26Z","lastTransitionTime":"2026-03-20T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.899366 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.899664 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.899812 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.899905 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.899989 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:26Z","lastTransitionTime":"2026-03-20T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.916931 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:10:26 crc kubenswrapper[5106]: I0320 00:10:26.918121 5106 scope.go:117] "RemoveContainer" containerID="b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d"
Mar 20 00:10:26 crc kubenswrapper[5106]: E0320 00:10:26.918398 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.002377 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.002454 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.002485 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.002515 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.002540 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:27Z","lastTransitionTime":"2026-03-20T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.105333 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.105393 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.105406 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.105429 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.105438 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:27Z","lastTransitionTime":"2026-03-20T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.160544 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Mar 20 00:10:27 crc kubenswrapper[5106]: E0320 00:10:27.160684 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.160764 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Mar 20 00:10:27 crc kubenswrapper[5106]: E0320 00:10:27.160916 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.170256 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b816935-a84b-4aa1-850d-23de45ec2047\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a52ed260d73e5a0b44832957540e259da3d4ef397908acd15ad2b9a53eac5878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.183147 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268db469-5250-4a96-998b-1d4a465bb175\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac75ee6fde37618d4f3addeaf11a5035d2a16b4a85f84bfb1ce62033bdfaaad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPa
th\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://35863999352b5c14cf487ffef517f6c5011b42d6b9129d6f9f161ad9b265e356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi
\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a23d38bcc5416cc2ef716b58ecbcf50ca7b47b3b0e633e63c4cb2566c0285d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.193073 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.203886 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.207632 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.207865 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.207923 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.208005 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.208087 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:27Z","lastTransitionTime":"2026-03-20T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.213975 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.222534 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.231052 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.242666 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-trcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.255063 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f7bf04f-91df-48c2-916a-afe1e635b543\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b378391966
874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0320 00:10:17.305762 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 00:10:17.305884 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0320 00:10:17.306612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-695311479/tls.crt::/tmp/serving-cert-695311479/tls.key\\\\\\\"\\\\nI0320 00:10:18.010003 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 00:10:18.011836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 00:10:18.011852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 00:10:18.011883 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 00:10:18.011894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 00:10:18.015663 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 00:10:18.015691 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015698 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 00:10:18.015708 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 00:10:18.015711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 00:10:18.015715 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0320 00:10:18.015743 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0320 00:10:18.017230 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.267718 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fc70aa-db07-47cd-b307-36ca79bc3366\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wwnpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.275427 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kq4bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwttp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kq4bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.285205 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c5fc6b1-3d0e-4319-b15c-341af0218c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23b036e80d136c45a9ff13a63408428b22b422d4f4dcf9c1e8973f1ff9c5bb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea7f9d9918db9d162a898a623186f1b6c094b8b6a48de2e07d8a3b950e1989a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://889ed27676105199f97185f7271b652aed7692b5b50a68e9eb5403ffffe024dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.296457 5106 
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xtksh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xtksh\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.310600 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.310824 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.310884 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.310943 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.311264 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:27Z","lastTransitionTime":"2026-03-20T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.317527 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99795294-4844-44e8-b55b-998323bd4f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvw6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.327824 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a6c6201-eadf-497e-921b-e5fcec3ccddb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-769dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.344654 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfb8ff3-0a09-426e-a55b-fdb55d63f156\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8eeb239f1f4dc20d7a4eb3082743b15402fcfa64c2d765f055b60febb4bb8159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e57cd65d5edb028b95e533a8f24e531dd67b433e774ff14db6159c675c881932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://db54af3009de86e4a8fcc0fdc5233912d638c6cb4df949f
46d9d0836472096c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f027531e351171f27f0d234f1340e8fc330fd1beef5df845b3d7c67d8f8cdd5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"na
me\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8428ebe4abc8dd7a8d292a9989c16c6616ff7e40a01779178094a437cb8af76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":
{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b62674
47cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.354454 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.363170 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qf4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.371455 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-zqbrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9d176-e017-4ab6-b0ad-7d97c5746baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24fjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zqbrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.413985 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.414032 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.414044 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.414061 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.414072 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:27Z","lastTransitionTime":"2026-03-20T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.516542 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.516628 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.516645 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.516666 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.516684 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:27Z","lastTransitionTime":"2026-03-20T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.619393 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.619442 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.619455 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.619522 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.619540 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:27Z","lastTransitionTime":"2026-03-20T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.722203 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.722317 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.722330 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.722347 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.722358 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:27Z","lastTransitionTime":"2026-03-20T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.824663 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.824708 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.824720 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.824737 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.824749 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:27Z","lastTransitionTime":"2026-03-20T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.930215 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.930304 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.930331 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.930369 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:27 crc kubenswrapper[5106]: I0320 00:10:27.930397 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:27Z","lastTransitionTime":"2026-03-20T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.033290 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.033348 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.033362 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.033383 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.033397 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:28Z","lastTransitionTime":"2026-03-20T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.136632 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.137027 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.137215 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.137359 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.137565 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:28Z","lastTransitionTime":"2026-03-20T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.160351 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.160609 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.160369 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.161151 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.240432 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.240517 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.240538 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.240561 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.240599 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:28Z","lastTransitionTime":"2026-03-20T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.294367 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.294437 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.294456 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.294481 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.294506 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:28Z","lastTransitionTime":"2026-03-20T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.312153 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.317420 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.317674 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.317763 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.317850 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.317956 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:28Z","lastTransitionTime":"2026-03-20T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.334597 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.338902 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.338954 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.338972 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.338994 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.339007 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:28Z","lastTransitionTime":"2026-03-20T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.354794 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.360192 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.360563 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.360690 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.360774 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.360854 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:28Z","lastTransitionTime":"2026-03-20T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.375449 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.381892 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.381999 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.382018 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.382043 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.382059 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:28Z","lastTransitionTime":"2026-03-20T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.393714 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.394419 5106 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.395963 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.396007 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.396018 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.396033 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.396044 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:28Z","lastTransitionTime":"2026-03-20T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.498787 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.499069 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.499146 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.499237 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.499306 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:28Z","lastTransitionTime":"2026-03-20T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.573568 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.573669 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.573716 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.573769 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.573942 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.573963 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.573978 5106 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.574037 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:32.574019709 +0000 UTC m=+87.007753763 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.574458 5106 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.574462 5106 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.574567 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.574608 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.574618 5106 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.574503 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert 
podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:32.574492371 +0000 UTC m=+87.008226425 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.574770 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:32.574709096 +0000 UTC m=+87.008443150 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.574811 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:32.574801169 +0000 UTC m=+87.008535243 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.602519 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.602568 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.602601 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.602619 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.602634 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:28Z","lastTransitionTime":"2026-03-20T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.674973 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.675141 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:10:32.675116299 +0000 UTC m=+87.108850383 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.675289 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs\") pod \"network-metrics-daemon-5qf4l\" (UID: \"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\") " pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.675421 5106 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 00:10:28 crc kubenswrapper[5106]: E0320 00:10:28.675472 5106 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs podName:64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:32.675459437 +0000 UTC m=+87.109193491 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs") pod "network-metrics-daemon-5qf4l" (UID: "64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.706418 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.706465 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.706478 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.706496 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.706507 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:28Z","lastTransitionTime":"2026-03-20T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.809848 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.809915 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.809946 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.809994 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.810017 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:28Z","lastTransitionTime":"2026-03-20T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.911955 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.911991 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.912000 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.912012 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:28 crc kubenswrapper[5106]: I0320 00:10:28.912021 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:28Z","lastTransitionTime":"2026-03-20T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.015074 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.015130 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.015148 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.015176 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.015195 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:29Z","lastTransitionTime":"2026-03-20T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.118103 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.118149 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.118163 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.118179 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.118191 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:29Z","lastTransitionTime":"2026-03-20T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.160915 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.160989 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:29 crc kubenswrapper[5106]: E0320 00:10:29.161340 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:10:29 crc kubenswrapper[5106]: E0320 00:10:29.161503 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.222051 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.222150 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.222177 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.222219 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.222244 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:29Z","lastTransitionTime":"2026-03-20T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.266670 5106 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.325753 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.325821 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.325837 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.325862 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.325877 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:29Z","lastTransitionTime":"2026-03-20T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.429080 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.429132 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.429146 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.429166 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.429179 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:29Z","lastTransitionTime":"2026-03-20T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.532928 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.532981 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.532992 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.533015 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.533029 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:29Z","lastTransitionTime":"2026-03-20T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.636207 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.636270 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.636284 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.636303 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.636314 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:29Z","lastTransitionTime":"2026-03-20T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.739815 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.740054 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.740130 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.740195 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.740258 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:29Z","lastTransitionTime":"2026-03-20T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.844038 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.844117 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.844141 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.844173 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.844196 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:29Z","lastTransitionTime":"2026-03-20T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.946640 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.946729 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.946747 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.946774 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:29 crc kubenswrapper[5106]: I0320 00:10:29.946789 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:29Z","lastTransitionTime":"2026-03-20T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.048768 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.048872 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.048887 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.048911 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.048926 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:30Z","lastTransitionTime":"2026-03-20T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.151030 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.151079 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.151091 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.151106 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.151117 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:30Z","lastTransitionTime":"2026-03-20T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.160496 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.160500 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:30 crc kubenswrapper[5106]: E0320 00:10:30.160649 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:10:30 crc kubenswrapper[5106]: E0320 00:10:30.160796 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.253253 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.253289 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.253299 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.253312 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.253322 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:30Z","lastTransitionTime":"2026-03-20T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.355495 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.355637 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.355667 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.355698 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.355722 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:30Z","lastTransitionTime":"2026-03-20T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.458212 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.458270 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.458288 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.458314 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.458326 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:30Z","lastTransitionTime":"2026-03-20T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.560390 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.560467 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.560492 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.560516 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.560538 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:30Z","lastTransitionTime":"2026-03-20T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.662985 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.663058 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.663078 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.663101 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.663117 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:30Z","lastTransitionTime":"2026-03-20T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.765305 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.765389 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.765409 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.765434 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.765456 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:30Z","lastTransitionTime":"2026-03-20T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.867826 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.867871 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.867883 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.867901 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.867918 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:30Z","lastTransitionTime":"2026-03-20T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.970672 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.970737 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.970765 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.970795 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:30 crc kubenswrapper[5106]: I0320 00:10:30.970820 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:30Z","lastTransitionTime":"2026-03-20T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.073218 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.073501 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.073625 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.073724 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.073807 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:31Z","lastTransitionTime":"2026-03-20T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.160394 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.160502 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:31 crc kubenswrapper[5106]: E0320 00:10:31.160826 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:10:31 crc kubenswrapper[5106]: E0320 00:10:31.161212 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.176872 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.176954 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.176979 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.177012 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.177034 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:31Z","lastTransitionTime":"2026-03-20T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.279374 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.279421 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.279435 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.279452 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.279466 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:31Z","lastTransitionTime":"2026-03-20T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.382406 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.382442 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.382454 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.382473 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.382484 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:31Z","lastTransitionTime":"2026-03-20T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.484725 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.484790 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.484809 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.484834 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.484852 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:31Z","lastTransitionTime":"2026-03-20T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.587013 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.587210 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.587237 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.587261 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.587278 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:31Z","lastTransitionTime":"2026-03-20T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.690737 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.690806 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.690979 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.691015 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.691041 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:31Z","lastTransitionTime":"2026-03-20T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.793279 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.793341 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.793360 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.793386 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.793422 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:31Z","lastTransitionTime":"2026-03-20T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.896077 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.896125 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.896137 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.896154 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.896165 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:31Z","lastTransitionTime":"2026-03-20T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.998215 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.998268 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.998281 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.998299 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:31 crc kubenswrapper[5106]: I0320 00:10:31.998310 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:31Z","lastTransitionTime":"2026-03-20T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.100503 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.100546 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.100554 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.100568 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.100595 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:32Z","lastTransitionTime":"2026-03-20T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.160705 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.160751 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:32 crc kubenswrapper[5106]: E0320 00:10:32.160867 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:10:32 crc kubenswrapper[5106]: E0320 00:10:32.161099 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.203226 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.203275 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.203284 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.203299 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.203309 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:32Z","lastTransitionTime":"2026-03-20T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.306124 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.306211 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.306233 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.306263 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.306285 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:32Z","lastTransitionTime":"2026-03-20T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.409137 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.409214 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.409232 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.409251 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.409264 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:32Z","lastTransitionTime":"2026-03-20T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.512184 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.512233 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.512243 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.512256 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.512267 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:32Z","lastTransitionTime":"2026-03-20T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.614391 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.614430 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.614440 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.614453 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.614462 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:32Z","lastTransitionTime":"2026-03-20T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.717225 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.717311 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.717339 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.717376 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.717403 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:32Z","lastTransitionTime":"2026-03-20T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.820699 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.820793 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.820823 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.820858 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.820885 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:32Z","lastTransitionTime":"2026-03-20T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.924195 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.924252 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.924264 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.924286 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:32 crc kubenswrapper[5106]: I0320 00:10:32.924302 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:32Z","lastTransitionTime":"2026-03-20T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.027456 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.027538 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.027561 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.027623 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.027644 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:33Z","lastTransitionTime":"2026-03-20T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.130371 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.130441 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.130460 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.130487 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.130508 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:33Z","lastTransitionTime":"2026-03-20T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.233757 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.233866 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.233890 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.233922 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.233943 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:33Z","lastTransitionTime":"2026-03-20T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.337057 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.337127 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.337146 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.337172 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.337197 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:33Z","lastTransitionTime":"2026-03-20T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.434640 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.434712 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.434788 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs\") pod \"network-metrics-daemon-5qf4l\" (UID: \"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\") " pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.434832 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.434881 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.434914 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.434936 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.435035 5106 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.435118 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:41.43509718 +0000 UTC m=+95.868831254 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.434826 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.435252 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.435263 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:10:41.435241783 +0000 UTC m=+95.868975887 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.435273 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.435299 5106 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.435338 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.435355 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.435370 5106 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.435344 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:41.435332666 +0000 UTC m=+95.869066780 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.435446 5106 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.435446 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. 
No retries permitted until 2026-03-20 00:10:41.435414948 +0000 UTC m=+95.869149082 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.435480 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs podName:64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:41.435469399 +0000 UTC m=+95.869203543 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs") pod "network-metrics-daemon-5qf4l" (UID: "64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.435612 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.435683 5106 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.435779 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:10:33 crc kubenswrapper[5106]: E0320 00:10:33.436041 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:41.435985432 +0000 UTC m=+95.869719526 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.441651 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.441731 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.441757 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.441790 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.441817 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:33Z","lastTransitionTime":"2026-03-20T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.546311 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.546403 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.546429 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.546472 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.546496 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:33Z","lastTransitionTime":"2026-03-20T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.650090 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.650173 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.650196 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.650230 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.650257 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:33Z","lastTransitionTime":"2026-03-20T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.756133 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.756229 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.756251 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.756284 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.756307 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:33Z","lastTransitionTime":"2026-03-20T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.859163 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.859221 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.859236 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.859253 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.859264 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:33Z","lastTransitionTime":"2026-03-20T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.962508 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.962565 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.962621 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.962675 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:33 crc kubenswrapper[5106]: I0320 00:10:33.962698 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:33Z","lastTransitionTime":"2026-03-20T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.065285 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.065331 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.065342 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.065356 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.065367 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:34Z","lastTransitionTime":"2026-03-20T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.160622 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:34 crc kubenswrapper[5106]: E0320 00:10:34.160804 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.160632 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:34 crc kubenswrapper[5106]: E0320 00:10:34.161718 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.168057 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.168095 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.168104 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.168120 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.168130 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:34Z","lastTransitionTime":"2026-03-20T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.270134 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.270182 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.270195 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.270213 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.270226 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:34Z","lastTransitionTime":"2026-03-20T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.373244 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.373323 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.373338 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.373353 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.373365 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:34Z","lastTransitionTime":"2026-03-20T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.475196 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.475252 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.475263 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.475276 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.475287 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:34Z","lastTransitionTime":"2026-03-20T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.577550 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.577609 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.577826 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.577870 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.577883 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:34Z","lastTransitionTime":"2026-03-20T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.680415 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.680449 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.680462 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.680475 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.680485 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:34Z","lastTransitionTime":"2026-03-20T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.782930 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.782986 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.782998 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.783018 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.783032 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:34Z","lastTransitionTime":"2026-03-20T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.885400 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.885461 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.885476 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.885494 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.885507 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:34Z","lastTransitionTime":"2026-03-20T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.987520 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.987567 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.987607 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.987627 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:34 crc kubenswrapper[5106]: I0320 00:10:34.987641 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:34Z","lastTransitionTime":"2026-03-20T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.091321 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.091361 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.091372 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.091388 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.091400 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:35Z","lastTransitionTime":"2026-03-20T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.160885 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:35 crc kubenswrapper[5106]: E0320 00:10:35.161043 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.161780 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:35 crc kubenswrapper[5106]: E0320 00:10:35.162128 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.193242 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.193285 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.193295 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.193309 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.193321 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:35Z","lastTransitionTime":"2026-03-20T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.295891 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.295942 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.295955 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.295973 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.295985 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:35Z","lastTransitionTime":"2026-03-20T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.398084 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.398150 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.398169 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.398194 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.398213 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:35Z","lastTransitionTime":"2026-03-20T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.501143 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.501204 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.501222 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.501245 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.501261 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:35Z","lastTransitionTime":"2026-03-20T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.604090 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.604172 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.604197 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.604223 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.604249 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:35Z","lastTransitionTime":"2026-03-20T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.706512 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.706597 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.706614 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.706637 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.706656 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:35Z","lastTransitionTime":"2026-03-20T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.809709 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.809786 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.809809 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.809835 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.809856 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:35Z","lastTransitionTime":"2026-03-20T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.911915 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.911998 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.912017 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.912038 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:35 crc kubenswrapper[5106]: I0320 00:10:35.912054 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:35Z","lastTransitionTime":"2026-03-20T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.014544 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.014610 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.014623 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.014639 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.014652 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:36Z","lastTransitionTime":"2026-03-20T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.117307 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.117357 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.117371 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.117388 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.117400 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:36Z","lastTransitionTime":"2026-03-20T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.160512 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.160557 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:36 crc kubenswrapper[5106]: E0320 00:10:36.160684 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:10:36 crc kubenswrapper[5106]: E0320 00:10:36.160850 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.219466 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.219532 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.219548 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.219571 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.219613 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:36Z","lastTransitionTime":"2026-03-20T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.322061 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.322116 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.322159 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.322186 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.322200 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:36Z","lastTransitionTime":"2026-03-20T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.424202 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.424241 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.424251 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.424263 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.424272 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:36Z","lastTransitionTime":"2026-03-20T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.526220 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.526268 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.526283 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.526301 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.526312 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:36Z","lastTransitionTime":"2026-03-20T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.628774 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.628818 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.628830 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.628847 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.628859 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:36Z","lastTransitionTime":"2026-03-20T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.730752 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.730795 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.730808 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.730825 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.730837 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:36Z","lastTransitionTime":"2026-03-20T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.833146 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.833194 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.833206 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.833221 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.833232 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:36Z","lastTransitionTime":"2026-03-20T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.935213 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.935254 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.935267 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.935281 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:36 crc kubenswrapper[5106]: I0320 00:10:36.935291 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:36Z","lastTransitionTime":"2026-03-20T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.038024 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.038080 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.038093 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.038113 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.038126 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:37Z","lastTransitionTime":"2026-03-20T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.140270 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.140321 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.140333 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.140351 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.140363 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:37Z","lastTransitionTime":"2026-03-20T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.160827 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.161280 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:37 crc kubenswrapper[5106]: E0320 00:10:37.161485 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:10:37 crc kubenswrapper[5106]: E0320 00:10:37.161626 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:10:37 crc kubenswrapper[5106]: E0320 00:10:37.163562 5106 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxgmg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessPr
obe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-wwnpd_openshift-multus(65fc70aa-db07-47cd-b307-36ca79bc3366): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 00:10:37 crc kubenswrapper[5106]: E0320 00:10:37.164656 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:37 crc kubenswrapper[5106]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Mar 20 00:10:37 crc kubenswrapper[5106]: set -uo pipefail Mar 20 00:10:37 crc kubenswrapper[5106]: Mar 20 00:10:37 crc kubenswrapper[5106]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 20 00:10:37 crc kubenswrapper[5106]: Mar 20 00:10:37 crc kubenswrapper[5106]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 20 00:10:37 crc kubenswrapper[5106]: HOSTS_FILE="/etc/hosts" Mar 20 00:10:37 crc kubenswrapper[5106]: TEMP_FILE="/tmp/hosts.tmp" Mar 20 00:10:37 crc kubenswrapper[5106]: Mar 20 00:10:37 crc kubenswrapper[5106]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 20 00:10:37 crc kubenswrapper[5106]: Mar 20 00:10:37 crc kubenswrapper[5106]: # Make a temporary file with the old hosts file's attributes. Mar 20 00:10:37 crc kubenswrapper[5106]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 20 00:10:37 crc kubenswrapper[5106]: echo "Failed to preserve hosts file. Exiting." 
Mar 20 00:10:37 crc kubenswrapper[5106]: exit 1 Mar 20 00:10:37 crc kubenswrapper[5106]: fi Mar 20 00:10:37 crc kubenswrapper[5106]: Mar 20 00:10:37 crc kubenswrapper[5106]: while true; do Mar 20 00:10:37 crc kubenswrapper[5106]: declare -A svc_ips Mar 20 00:10:37 crc kubenswrapper[5106]: for svc in "${services[@]}"; do Mar 20 00:10:37 crc kubenswrapper[5106]: # Fetch service IP from cluster dns if present. We make several tries Mar 20 00:10:37 crc kubenswrapper[5106]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 20 00:10:37 crc kubenswrapper[5106]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 20 00:10:37 crc kubenswrapper[5106]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 20 00:10:37 crc kubenswrapper[5106]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 00:10:37 crc kubenswrapper[5106]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 00:10:37 crc kubenswrapper[5106]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 00:10:37 crc kubenswrapper[5106]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 20 00:10:37 crc kubenswrapper[5106]: for i in ${!cmds[*]} Mar 20 00:10:37 crc kubenswrapper[5106]: do Mar 20 00:10:37 crc kubenswrapper[5106]: ips=($(eval "${cmds[i]}")) Mar 20 00:10:37 crc kubenswrapper[5106]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 20 00:10:37 crc kubenswrapper[5106]: svc_ips["${svc}"]="${ips[@]}" Mar 20 00:10:37 crc kubenswrapper[5106]: break Mar 20 00:10:37 crc kubenswrapper[5106]: fi Mar 20 00:10:37 crc kubenswrapper[5106]: done Mar 20 00:10:37 crc kubenswrapper[5106]: done Mar 20 00:10:37 crc kubenswrapper[5106]: Mar 20 00:10:37 crc kubenswrapper[5106]: # Update /etc/hosts only if we get valid service IPs Mar 20 00:10:37 crc kubenswrapper[5106]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 20 00:10:37 crc kubenswrapper[5106]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 20 00:10:37 crc kubenswrapper[5106]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 20 00:10:37 crc kubenswrapper[5106]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 20 00:10:37 crc kubenswrapper[5106]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 20 00:10:37 crc kubenswrapper[5106]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 20 00:10:37 crc kubenswrapper[5106]: sleep 60 & wait Mar 20 00:10:37 crc kubenswrapper[5106]: continue Mar 20 00:10:37 crc kubenswrapper[5106]: fi Mar 20 00:10:37 crc kubenswrapper[5106]: Mar 20 00:10:37 crc kubenswrapper[5106]: # Append resolver entries for services Mar 20 00:10:37 crc kubenswrapper[5106]: rc=0 Mar 20 00:10:37 crc kubenswrapper[5106]: for svc in "${!svc_ips[@]}"; do Mar 20 00:10:37 crc kubenswrapper[5106]: for ip in ${svc_ips[${svc}]}; do Mar 20 00:10:37 crc kubenswrapper[5106]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 20 00:10:37 crc kubenswrapper[5106]: done Mar 20 00:10:37 crc kubenswrapper[5106]: done Mar 20 00:10:37 crc kubenswrapper[5106]: if [[ $rc -ne 0 ]]; then Mar 20 00:10:37 crc kubenswrapper[5106]: sleep 60 & wait Mar 20 00:10:37 crc kubenswrapper[5106]: continue Mar 20 00:10:37 crc kubenswrapper[5106]: fi Mar 20 00:10:37 crc kubenswrapper[5106]: Mar 20 00:10:37 crc kubenswrapper[5106]: Mar 20 00:10:37 crc kubenswrapper[5106]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 20 00:10:37 crc kubenswrapper[5106]: # Replace /etc/hosts with our modified version if needed Mar 20 00:10:37 crc kubenswrapper[5106]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 20 00:10:37 crc kubenswrapper[5106]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 20 00:10:37 crc kubenswrapper[5106]: fi Mar 20 00:10:37 crc kubenswrapper[5106]: sleep 60 & wait Mar 20 00:10:37 crc kubenswrapper[5106]: unset svc_ips Mar 20 00:10:37 crc kubenswrapper[5106]: done Mar 20 00:10:37 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24fjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-zqbrj_openshift-dns(58f9d176-e017-4ab6-b0ad-7d97c5746baf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:37 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:37 crc kubenswrapper[5106]: E0320 00:10:37.164717 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" podUID="65fc70aa-db07-47cd-b307-36ca79bc3366" Mar 20 00:10:37 crc kubenswrapper[5106]: E0320 00:10:37.165828 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not 
yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-zqbrj" podUID="58f9d176-e017-4ab6-b0ad-7d97c5746baf" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.181613 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c5fc6b1-3d0e-4319-b15c-341af0218c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23b036e80d136c45a9ff13a63408428b22b422d4f4dcf9c1e8973f1ff9c5bb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}}
,\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea7f9d9918db9d162a898a623186f1b6c094b8b6a48de2e07d8a3b950e1989a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://889ed27676105199f97185f7271b652aed7692b5b50a68e9eb5403ffffe024dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ku
be-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.193733 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xtksh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xtksh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.216619 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99795294-4844-44e8-b55b-998323bd4f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvw6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.225300 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a6c6201-eadf-497e-921b-e5fcec3ccddb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-769dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.243566 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.244662 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.244738 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.244770 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.244787 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:37Z","lastTransitionTime":"2026-03-20T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.245087 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfb8ff3-0a09-426e-a55b-fdb55d63f156\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8eeb239f1f4dc20d7a4eb3082743b15402fcfa64c2d765f055b60febb4bb8159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e57cd65d5edb028b95e533a8f24e531dd67b433e774ff14db6159c675c881932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://db54af3009de86e4a8fcc0fdc5233912d638c6cb4df949f46d9d0836472096c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f027531e351171f27f0d234f1340e8fc330fd1beef5df845b3d7c67d8f8cdd5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8428ebe4abc8dd7a8d292a9989c16c6616ff7e40a01779178094a437cb8af76\\\",\\\"image\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1
944c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.256168 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.265221 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qf4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.271428 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-zqbrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9d176-e017-4ab6-b0ad-7d97c5746baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24fjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zqbrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.278075 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b816935-a84b-4aa1-850d-23de45ec2047\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a52ed260d73e5a0b44832957540e259da3d4ef397908acd15ad2b9a53eac5878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.288953 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268db469-5250-4a96-998b-1d4a465bb175\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac75ee6fde37618d4f3addeaf11a5035d2a16b4a85f84bfb1ce62033bdfaaad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPa
th\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://35863999352b5c14cf487ffef517f6c5011b42d6b9129d6f9f161ad9b265e356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi
\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a23d38bcc5416cc2ef716b58ecbcf50ca7b47b3b0e633e63c4cb2566c0285d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.298290 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.308364 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.316314 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.324644 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.334746 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.342135 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-trcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.346818 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.346862 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.346876 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.346893 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.346904 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:37Z","lastTransitionTime":"2026-03-20T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.353359 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f7bf04f-91df-48c2-916a-afe1e635b543\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b378391966
874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0320 00:10:17.305762 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 00:10:17.305884 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0320 00:10:17.306612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-695311479/tls.crt::/tmp/serving-cert-695311479/tls.key\\\\\\\"\\\\nI0320 00:10:18.010003 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 00:10:18.011836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 00:10:18.011852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 00:10:18.011883 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 00:10:18.011894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 00:10:18.015663 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 00:10:18.015691 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015698 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 00:10:18.015708 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 00:10:18.015711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 00:10:18.015715 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0320 00:10:18.015743 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0320 00:10:18.017230 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.365338 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fc70aa-db07-47cd-b307-36ca79bc3366\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wwnpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.371355 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kq4bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwttp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kq4bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.448517 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.448551 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.448561 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.448592 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.448601 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:37Z","lastTransitionTime":"2026-03-20T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.550459 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.550518 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.550528 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.550544 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.550554 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:37Z","lastTransitionTime":"2026-03-20T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.652933 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.652977 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.652988 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.653006 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.653016 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:37Z","lastTransitionTime":"2026-03-20T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.754832 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.754874 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.754884 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.754897 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.754906 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:37Z","lastTransitionTime":"2026-03-20T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.856094 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.856128 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.856136 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.856149 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.856158 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:37Z","lastTransitionTime":"2026-03-20T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.958487 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.958523 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.958532 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.958545 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:37 crc kubenswrapper[5106]: I0320 00:10:37.958555 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:37Z","lastTransitionTime":"2026-03-20T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.060617 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.060655 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.060665 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.060678 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.060690 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:38Z","lastTransitionTime":"2026-03-20T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.160200 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:38 crc kubenswrapper[5106]: E0320 00:10:38.160362 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.160620 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:38 crc kubenswrapper[5106]: E0320 00:10:38.160856 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.162323 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.162350 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.162359 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.162372 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.162381 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:38Z","lastTransitionTime":"2026-03-20T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:38 crc kubenswrapper[5106]: E0320 00:10:38.162866 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:38 crc kubenswrapper[5106]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 20 00:10:38 crc kubenswrapper[5106]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 20 00:10:38 crc kubenswrapper[5106]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mvlx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-xtksh_openshift-multus(9da3e0a0-f6ab-4f57-925e-c59772b3d6d9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:38 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:38 crc kubenswrapper[5106]: E0320 00:10:38.164306 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-xtksh" podUID="9da3e0a0-f6ab-4f57-925e-c59772b3d6d9" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.264358 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.264410 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.264428 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.264450 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.264467 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:38Z","lastTransitionTime":"2026-03-20T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.366638 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.366908 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.366994 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.367089 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.367189 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:38Z","lastTransitionTime":"2026-03-20T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.469658 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.469698 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.469710 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.469724 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.469733 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:38Z","lastTransitionTime":"2026-03-20T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.571142 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.571189 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.571202 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.571219 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.571230 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:38Z","lastTransitionTime":"2026-03-20T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.598722 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.598761 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.598774 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.598790 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.598803 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:38Z","lastTransitionTime":"2026-03-20T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:38 crc kubenswrapper[5106]: E0320 00:10:38.609002 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.611850 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.611881 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.611892 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.611906 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.611917 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:38Z","lastTransitionTime":"2026-03-20T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:38 crc kubenswrapper[5106]: E0320 00:10:38.622964 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.626250 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.626288 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.626297 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.626310 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.626318 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:38Z","lastTransitionTime":"2026-03-20T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:38 crc kubenswrapper[5106]: E0320 00:10:38.635243 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.638400 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.638476 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.638492 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.638510 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.638522 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:38Z","lastTransitionTime":"2026-03-20T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:38 crc kubenswrapper[5106]: E0320 00:10:38.649159 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.652212 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.652274 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.652289 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.652306 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.652320 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:38Z","lastTransitionTime":"2026-03-20T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:38 crc kubenswrapper[5106]: E0320 00:10:38.662981 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:38 crc kubenswrapper[5106]: E0320 00:10:38.663102 5106 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.673184 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.673225 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.673235 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.673249 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.673259 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:38Z","lastTransitionTime":"2026-03-20T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.775025 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.775074 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.775086 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.775104 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.775118 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:38Z","lastTransitionTime":"2026-03-20T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.877469 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.877511 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.877523 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.877541 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.877553 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:38Z","lastTransitionTime":"2026-03-20T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.979732 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.979856 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.979875 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.979890 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:38 crc kubenswrapper[5106]: I0320 00:10:38.979923 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:38Z","lastTransitionTime":"2026-03-20T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.082004 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.082183 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.082282 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.082381 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.082479 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:39Z","lastTransitionTime":"2026-03-20T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.160275 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.160275 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:39 crc kubenswrapper[5106]: E0320 00:10:39.161013 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:10:39 crc kubenswrapper[5106]: E0320 00:10:39.162389 5106 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,Volum
eDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 00:10:39 crc kubenswrapper[5106]: E0320 00:10:39.162604 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:39 crc kubenswrapper[5106]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Mar 20 00:10:39 crc kubenswrapper[5106]: set -o allexport Mar 20 00:10:39 crc kubenswrapper[5106]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 20 00:10:39 crc kubenswrapper[5106]: source /etc/kubernetes/apiserver-url.env Mar 20 00:10:39 crc kubenswrapper[5106]: else Mar 20 00:10:39 crc kubenswrapper[5106]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 20 00:10:39 crc kubenswrapper[5106]: exit 1 Mar 20 00:10:39 crc kubenswrapper[5106]: fi Mar 20 00:10:39 crc kubenswrapper[5106]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 20 00:10:39 crc kubenswrapper[5106]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:39 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:39 crc kubenswrapper[5106]: E0320 00:10:39.161555 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:10:39 crc kubenswrapper[5106]: E0320 00:10:39.162717 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:39 crc kubenswrapper[5106]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Mar 20 00:10:39 crc kubenswrapper[5106]: if [[ -f "/env/_master" ]]; then Mar 20 00:10:39 crc kubenswrapper[5106]: set -o allexport Mar 20 00:10:39 crc kubenswrapper[5106]: source "/env/_master" Mar 20 00:10:39 crc kubenswrapper[5106]: set +o allexport Mar 20 00:10:39 crc kubenswrapper[5106]: fi Mar 20 00:10:39 crc kubenswrapper[5106]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 20 00:10:39 crc kubenswrapper[5106]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 20 00:10:39 crc kubenswrapper[5106]: ho_enable="--enable-hybrid-overlay" Mar 20 00:10:39 crc kubenswrapper[5106]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 20 00:10:39 crc kubenswrapper[5106]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 20 00:10:39 crc kubenswrapper[5106]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 20 00:10:39 crc kubenswrapper[5106]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 00:10:39 crc kubenswrapper[5106]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 20 00:10:39 crc kubenswrapper[5106]: --webhook-host=127.0.0.1 \ Mar 20 00:10:39 crc kubenswrapper[5106]: --webhook-port=9743 \ Mar 20 00:10:39 crc kubenswrapper[5106]: ${ho_enable} \ Mar 20 00:10:39 crc kubenswrapper[5106]: --enable-interconnect \ Mar 20 00:10:39 crc kubenswrapper[5106]: 
--disable-approver \ Mar 20 00:10:39 crc kubenswrapper[5106]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 20 00:10:39 crc kubenswrapper[5106]: --wait-for-kubernetes-api=200s \ Mar 20 00:10:39 crc kubenswrapper[5106]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 20 00:10:39 crc kubenswrapper[5106]: --loglevel="${LOGLEVEL}" Mar 20 00:10:39 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:n
il,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:39 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:39 crc kubenswrapper[5106]: E0320 00:10:39.163344 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:39 crc kubenswrapper[5106]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 20 00:10:39 crc kubenswrapper[5106]: while [ true ]; Mar 20 00:10:39 crc kubenswrapper[5106]: do Mar 20 00:10:39 crc kubenswrapper[5106]: for f in $(ls /tmp/serviceca); do Mar 20 00:10:39 crc kubenswrapper[5106]: echo $f Mar 20 00:10:39 crc kubenswrapper[5106]: ca_file_path="/tmp/serviceca/${f}" Mar 20 00:10:39 crc kubenswrapper[5106]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 20 00:10:39 crc kubenswrapper[5106]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 20 00:10:39 crc kubenswrapper[5106]: if [ -e "${reg_dir_path}" ]; then Mar 20 00:10:39 crc kubenswrapper[5106]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 20 00:10:39 crc kubenswrapper[5106]: else Mar 20 00:10:39 crc kubenswrapper[5106]: mkdir $reg_dir_path Mar 20 00:10:39 crc kubenswrapper[5106]: cp $ca_file_path $reg_dir_path/ca.crt Mar 20 00:10:39 crc kubenswrapper[5106]: fi Mar 20 00:10:39 crc kubenswrapper[5106]: done Mar 20 00:10:39 crc kubenswrapper[5106]: for d in $(ls /etc/docker/certs.d); do Mar 20 00:10:39 crc kubenswrapper[5106]: echo $d Mar 20 00:10:39 crc 
kubenswrapper[5106]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 20 00:10:39 crc kubenswrapper[5106]: reg_conf_path="/tmp/serviceca/${dp}" Mar 20 00:10:39 crc kubenswrapper[5106]: if [ ! -e "${reg_conf_path}" ]; then Mar 20 00:10:39 crc kubenswrapper[5106]: rm -rf /etc/docker/certs.d/$d Mar 20 00:10:39 crc kubenswrapper[5106]: fi Mar 20 00:10:39 crc kubenswrapper[5106]: done Mar 20 00:10:39 crc kubenswrapper[5106]: sleep 60 & wait ${!} Mar 20 00:10:39 crc kubenswrapper[5106]: done Mar 20 00:10:39 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwttp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-kq4bp_openshift-image-registry(0e025495-7d3d-4ff6-a3af-a6d3c459cc74): CreateContainerConfigError: services have not yet been 
read at least once, cannot construct envvars Mar 20 00:10:39 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:39 crc kubenswrapper[5106]: E0320 00:10:39.163690 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Mar 20 00:10:39 crc kubenswrapper[5106]: E0320 00:10:39.163732 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Mar 20 00:10:39 crc kubenswrapper[5106]: E0320 00:10:39.164512 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-kq4bp" podUID="0e025495-7d3d-4ff6-a3af-a6d3c459cc74" Mar 20 00:10:39 crc kubenswrapper[5106]: E0320 00:10:39.164677 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:39 crc kubenswrapper[5106]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Mar 20 00:10:39 crc kubenswrapper[5106]: if [[ -f "/env/_master" ]]; then Mar 20 00:10:39 crc kubenswrapper[5106]: set -o allexport Mar 20 00:10:39 crc kubenswrapper[5106]: source "/env/_master" Mar 20 00:10:39 crc kubenswrapper[5106]: set +o allexport Mar 20 00:10:39 crc kubenswrapper[5106]: fi Mar 20 00:10:39 crc kubenswrapper[5106]: Mar 20 00:10:39 crc 
kubenswrapper[5106]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 20 00:10:39 crc kubenswrapper[5106]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 00:10:39 crc kubenswrapper[5106]: --disable-webhook \ Mar 20 00:10:39 crc kubenswrapper[5106]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 20 00:10:39 crc kubenswrapper[5106]: --loglevel="${LOGLEVEL}" Mar 20 00:10:39 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:39 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:39 crc kubenswrapper[5106]: E0320 00:10:39.165760 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.185405 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.185442 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.185451 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.185464 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.185476 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:39Z","lastTransitionTime":"2026-03-20T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.287542 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.287625 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.287649 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.287674 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.287690 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:39Z","lastTransitionTime":"2026-03-20T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.389469 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.389517 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.389530 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.389546 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.389557 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:39Z","lastTransitionTime":"2026-03-20T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.491803 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.491846 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.491855 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.491869 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.491881 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:39Z","lastTransitionTime":"2026-03-20T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.593751 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.593985 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.594071 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.594205 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.594289 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:39Z","lastTransitionTime":"2026-03-20T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.697238 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.697301 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.697321 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.697349 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.697372 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:39Z","lastTransitionTime":"2026-03-20T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.799365 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.799398 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.799410 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.799426 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.799439 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:39Z","lastTransitionTime":"2026-03-20T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.900787 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.900825 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.900834 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.900849 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:39 crc kubenswrapper[5106]: I0320 00:10:39.900858 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:39Z","lastTransitionTime":"2026-03-20T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.003011 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.003052 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.003063 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.003076 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.003087 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:40Z","lastTransitionTime":"2026-03-20T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.105491 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.105548 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.105563 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.105597 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.105623 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:40Z","lastTransitionTime":"2026-03-20T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.160708 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.160938 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:40 crc kubenswrapper[5106]: E0320 00:10:40.161107 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:10:40 crc kubenswrapper[5106]: E0320 00:10:40.161189 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:10:40 crc kubenswrapper[5106]: E0320 00:10:40.162501 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:40 crc kubenswrapper[5106]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 20 00:10:40 crc kubenswrapper[5106]: apiVersion: v1 Mar 20 00:10:40 crc kubenswrapper[5106]: clusters: Mar 20 00:10:40 crc kubenswrapper[5106]: - cluster: Mar 20 00:10:40 crc kubenswrapper[5106]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 20 00:10:40 crc kubenswrapper[5106]: server: https://api-int.crc.testing:6443 Mar 20 00:10:40 crc kubenswrapper[5106]: name: default-cluster Mar 20 00:10:40 crc kubenswrapper[5106]: contexts: Mar 20 00:10:40 crc kubenswrapper[5106]: - context: Mar 20 00:10:40 crc kubenswrapper[5106]: cluster: default-cluster Mar 20 00:10:40 crc kubenswrapper[5106]: namespace: default Mar 20 00:10:40 crc kubenswrapper[5106]: user: default-auth Mar 20 00:10:40 crc kubenswrapper[5106]: name: default-context Mar 20 00:10:40 crc kubenswrapper[5106]: current-context: default-context Mar 20 00:10:40 crc kubenswrapper[5106]: kind: Config Mar 20 00:10:40 crc kubenswrapper[5106]: preferences: {} Mar 20 00:10:40 crc kubenswrapper[5106]: users: Mar 20 00:10:40 crc 
kubenswrapper[5106]: - name: default-auth Mar 20 00:10:40 crc kubenswrapper[5106]: user: Mar 20 00:10:40 crc kubenswrapper[5106]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 00:10:40 crc kubenswrapper[5106]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 00:10:40 crc kubenswrapper[5106]: EOF Mar 20 00:10:40 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rszfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-qvw6r_openshift-ovn-kubernetes(99795294-4844-44e8-b55b-998323bd4f6e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:40 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:40 crc kubenswrapper[5106]: E0320 00:10:40.163330 5106 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfrsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-769dn_openshift-machine-config-operator(9a6c6201-eadf-497e-921b-e5fcec3ccddb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" 
logger="UnhandledError" Mar 20 00:10:40 crc kubenswrapper[5106]: E0320 00:10:40.164473 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" podUID="99795294-4844-44e8-b55b-998323bd4f6e" Mar 20 00:10:40 crc kubenswrapper[5106]: E0320 00:10:40.165533 5106 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfrsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-769dn_openshift-machine-config-operator(9a6c6201-eadf-497e-921b-e5fcec3ccddb): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 00:10:40 crc kubenswrapper[5106]: E0320 00:10:40.166639 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.207355 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.207661 5106 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.207820 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.207951 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.208075 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:40Z","lastTransitionTime":"2026-03-20T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.309954 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.309995 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.310008 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.310025 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.310037 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:40Z","lastTransitionTime":"2026-03-20T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.412640 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.412740 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.412754 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.412773 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.412786 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:40Z","lastTransitionTime":"2026-03-20T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.518059 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.518938 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.519064 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.519193 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.519278 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:40Z","lastTransitionTime":"2026-03-20T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.622506 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.622563 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.622613 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.622635 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.622652 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:40Z","lastTransitionTime":"2026-03-20T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.724984 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.725023 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.725031 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.725044 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.725052 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:40Z","lastTransitionTime":"2026-03-20T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.827275 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.827340 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.827359 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.827383 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.827621 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:40Z","lastTransitionTime":"2026-03-20T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.929988 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.930034 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.930045 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.930062 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:40 crc kubenswrapper[5106]: I0320 00:10:40.930073 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:40Z","lastTransitionTime":"2026-03-20T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.032522 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.032644 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.032674 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.032703 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.032726 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:41Z","lastTransitionTime":"2026-03-20T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.134152 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.134197 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.134208 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.134225 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.134241 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:41Z","lastTransitionTime":"2026-03-20T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.159979 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.159979 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.160112 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.160166 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.162379 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:41 crc kubenswrapper[5106]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Mar 20 00:10:41 crc kubenswrapper[5106]: set -euo pipefail Mar 20 00:10:41 crc kubenswrapper[5106]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Mar 20 00:10:41 crc kubenswrapper[5106]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Mar 20 00:10:41 crc kubenswrapper[5106]: # As the secret mount is optional we must wait for the files to be present. Mar 20 00:10:41 crc kubenswrapper[5106]: # The service is created in monitor.yaml and this is created in sdn.yaml. Mar 20 00:10:41 crc kubenswrapper[5106]: TS=$(date +%s) Mar 20 00:10:41 crc kubenswrapper[5106]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Mar 20 00:10:41 crc kubenswrapper[5106]: HAS_LOGGED_INFO=0 Mar 20 00:10:41 crc kubenswrapper[5106]: Mar 20 00:10:41 crc kubenswrapper[5106]: log_missing_certs(){ Mar 20 00:10:41 crc kubenswrapper[5106]: CUR_TS=$(date +%s) Mar 20 00:10:41 crc kubenswrapper[5106]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Mar 20 00:10:41 crc kubenswrapper[5106]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. 
Mar 20 00:10:41 crc kubenswrapper[5106]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Mar 20 00:10:41 crc kubenswrapper[5106]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Mar 20 00:10:41 crc kubenswrapper[5106]: HAS_LOGGED_INFO=1 Mar 20 00:10:41 crc kubenswrapper[5106]: fi Mar 20 00:10:41 crc kubenswrapper[5106]: } Mar 20 00:10:41 crc kubenswrapper[5106]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Mar 20 00:10:41 crc kubenswrapper[5106]: log_missing_certs Mar 20 00:10:41 crc kubenswrapper[5106]: sleep 5 Mar 20 00:10:41 crc kubenswrapper[5106]: done Mar 20 00:10:41 crc kubenswrapper[5106]: Mar 20 00:10:41 crc kubenswrapper[5106]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Mar 20 00:10:41 crc kubenswrapper[5106]: exec /usr/bin/kube-rbac-proxy \ Mar 20 00:10:41 crc kubenswrapper[5106]: --logtostderr \ Mar 20 00:10:41 crc kubenswrapper[5106]: --secure-listen-address=:9108 \ Mar 20 00:10:41 crc kubenswrapper[5106]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Mar 20 00:10:41 crc kubenswrapper[5106]: --upstream=http://127.0.0.1:29108/ \ Mar 20 00:10:41 crc kubenswrapper[5106]: --tls-private-key-file=${TLS_PK} \ Mar 20 00:10:41 crc kubenswrapper[5106]: --tls-cert-file=${TLS_CERT} Mar 20 00:10:41 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lh8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-trcsc_openshift-ovn-kubernetes(60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:41 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.165178 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:10:41 crc kubenswrapper[5106]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Mar 20 00:10:41 crc kubenswrapper[5106]: if [[ -f "/env/_master" ]]; then Mar 20 00:10:41 crc kubenswrapper[5106]: set -o allexport Mar 20 00:10:41 crc kubenswrapper[5106]: source "/env/_master" Mar 20 00:10:41 crc kubenswrapper[5106]: set +o allexport Mar 20 00:10:41 crc kubenswrapper[5106]: fi Mar 20 00:10:41 crc kubenswrapper[5106]: Mar 20 00:10:41 crc kubenswrapper[5106]: ovn_v4_join_subnet_opt= Mar 20 00:10:41 crc kubenswrapper[5106]: if [[ "" != "" ]]; then Mar 20 00:10:41 crc kubenswrapper[5106]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Mar 20 
00:10:41 crc kubenswrapper[5106]: fi Mar 20 00:10:41 crc kubenswrapper[5106]: ovn_v6_join_subnet_opt= Mar 20 00:10:41 crc kubenswrapper[5106]: if [[ "" != "" ]]; then Mar 20 00:10:41 crc kubenswrapper[5106]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Mar 20 00:10:41 crc kubenswrapper[5106]: fi Mar 20 00:10:41 crc kubenswrapper[5106]: Mar 20 00:10:41 crc kubenswrapper[5106]: ovn_v4_transit_switch_subnet_opt= Mar 20 00:10:41 crc kubenswrapper[5106]: if [[ "" != "" ]]; then Mar 20 00:10:41 crc kubenswrapper[5106]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Mar 20 00:10:41 crc kubenswrapper[5106]: fi Mar 20 00:10:41 crc kubenswrapper[5106]: ovn_v6_transit_switch_subnet_opt= Mar 20 00:10:41 crc kubenswrapper[5106]: if [[ "" != "" ]]; then Mar 20 00:10:41 crc kubenswrapper[5106]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Mar 20 00:10:41 crc kubenswrapper[5106]: fi Mar 20 00:10:41 crc kubenswrapper[5106]: Mar 20 00:10:41 crc kubenswrapper[5106]: dns_name_resolver_enabled_flag= Mar 20 00:10:41 crc kubenswrapper[5106]: if [[ "false" == "true" ]]; then Mar 20 00:10:41 crc kubenswrapper[5106]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Mar 20 00:10:41 crc kubenswrapper[5106]: fi Mar 20 00:10:41 crc kubenswrapper[5106]: Mar 20 00:10:41 crc kubenswrapper[5106]: persistent_ips_enabled_flag="--enable-persistent-ips" Mar 20 00:10:41 crc kubenswrapper[5106]: Mar 20 00:10:41 crc kubenswrapper[5106]: # This is needed so that converting clusters from GA to TP Mar 20 00:10:41 crc kubenswrapper[5106]: # will rollout control plane pods as well Mar 20 00:10:41 crc kubenswrapper[5106]: network_segmentation_enabled_flag= Mar 20 00:10:41 crc kubenswrapper[5106]: multi_network_enabled_flag= Mar 20 00:10:41 crc kubenswrapper[5106]: if [[ "true" == "true" ]]; then Mar 20 00:10:41 crc kubenswrapper[5106]: multi_network_enabled_flag="--enable-multi-network" Mar 20 00:10:41 crc kubenswrapper[5106]: fi 
Mar 20 00:10:41 crc kubenswrapper[5106]: if [[ "true" == "true" ]]; then Mar 20 00:10:41 crc kubenswrapper[5106]: if [[ "true" != "true" ]]; then Mar 20 00:10:41 crc kubenswrapper[5106]: multi_network_enabled_flag="--enable-multi-network" Mar 20 00:10:41 crc kubenswrapper[5106]: fi Mar 20 00:10:41 crc kubenswrapper[5106]: network_segmentation_enabled_flag="--enable-network-segmentation" Mar 20 00:10:41 crc kubenswrapper[5106]: fi Mar 20 00:10:41 crc kubenswrapper[5106]: Mar 20 00:10:41 crc kubenswrapper[5106]: route_advertisements_enable_flag= Mar 20 00:10:41 crc kubenswrapper[5106]: if [[ "false" == "true" ]]; then Mar 20 00:10:41 crc kubenswrapper[5106]: route_advertisements_enable_flag="--enable-route-advertisements" Mar 20 00:10:41 crc kubenswrapper[5106]: fi Mar 20 00:10:41 crc kubenswrapper[5106]: Mar 20 00:10:41 crc kubenswrapper[5106]: preconfigured_udn_addresses_enable_flag= Mar 20 00:10:41 crc kubenswrapper[5106]: if [[ "false" == "true" ]]; then Mar 20 00:10:41 crc kubenswrapper[5106]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Mar 20 00:10:41 crc kubenswrapper[5106]: fi Mar 20 00:10:41 crc kubenswrapper[5106]: Mar 20 00:10:41 crc kubenswrapper[5106]: # Enable multi-network policy if configured (control-plane always full mode) Mar 20 00:10:41 crc kubenswrapper[5106]: multi_network_policy_enabled_flag= Mar 20 00:10:41 crc kubenswrapper[5106]: if [[ "false" == "true" ]]; then Mar 20 00:10:41 crc kubenswrapper[5106]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Mar 20 00:10:41 crc kubenswrapper[5106]: fi Mar 20 00:10:41 crc kubenswrapper[5106]: Mar 20 00:10:41 crc kubenswrapper[5106]: # Enable admin network policy if configured (control-plane always full mode) Mar 20 00:10:41 crc kubenswrapper[5106]: admin_network_policy_enabled_flag= Mar 20 00:10:41 crc kubenswrapper[5106]: if [[ "true" == "true" ]]; then Mar 20 00:10:41 crc kubenswrapper[5106]: 
admin_network_policy_enabled_flag="--enable-admin-network-policy" Mar 20 00:10:41 crc kubenswrapper[5106]: fi Mar 20 00:10:41 crc kubenswrapper[5106]: Mar 20 00:10:41 crc kubenswrapper[5106]: if [ "shared" == "shared" ]; then Mar 20 00:10:41 crc kubenswrapper[5106]: gateway_mode_flags="--gateway-mode shared" Mar 20 00:10:41 crc kubenswrapper[5106]: elif [ "shared" == "local" ]; then Mar 20 00:10:41 crc kubenswrapper[5106]: gateway_mode_flags="--gateway-mode local" Mar 20 00:10:41 crc kubenswrapper[5106]: else Mar 20 00:10:41 crc kubenswrapper[5106]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Mar 20 00:10:41 crc kubenswrapper[5106]: exit 1 Mar 20 00:10:41 crc kubenswrapper[5106]: fi Mar 20 00:10:41 crc kubenswrapper[5106]: Mar 20 00:10:41 crc kubenswrapper[5106]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Mar 20 00:10:41 crc kubenswrapper[5106]: exec /usr/bin/ovnkube \ Mar 20 00:10:41 crc kubenswrapper[5106]: --enable-interconnect \ Mar 20 00:10:41 crc kubenswrapper[5106]: --init-cluster-manager "${K8S_NODE}" \ Mar 20 00:10:41 crc kubenswrapper[5106]: --config-file=/run/ovnkube-config/ovnkube.conf \ Mar 20 00:10:41 crc kubenswrapper[5106]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Mar 20 00:10:41 crc kubenswrapper[5106]: --metrics-bind-address "127.0.0.1:29108" \ Mar 20 00:10:41 crc kubenswrapper[5106]: --metrics-enable-pprof \ Mar 20 00:10:41 crc kubenswrapper[5106]: --metrics-enable-config-duration \ Mar 20 00:10:41 crc kubenswrapper[5106]: ${ovn_v4_join_subnet_opt} \ Mar 20 00:10:41 crc kubenswrapper[5106]: ${ovn_v6_join_subnet_opt} \ Mar 20 00:10:41 crc kubenswrapper[5106]: ${ovn_v4_transit_switch_subnet_opt} \ Mar 20 00:10:41 crc kubenswrapper[5106]: ${ovn_v6_transit_switch_subnet_opt} \ Mar 20 00:10:41 crc kubenswrapper[5106]: ${dns_name_resolver_enabled_flag} \ Mar 20 00:10:41 crc kubenswrapper[5106]: ${persistent_ips_enabled_flag} \ Mar 20 00:10:41 crc 
kubenswrapper[5106]: ${multi_network_enabled_flag} \ Mar 20 00:10:41 crc kubenswrapper[5106]: ${network_segmentation_enabled_flag} \ Mar 20 00:10:41 crc kubenswrapper[5106]: ${gateway_mode_flags} \ Mar 20 00:10:41 crc kubenswrapper[5106]: ${route_advertisements_enable_flag} \ Mar 20 00:10:41 crc kubenswrapper[5106]: ${preconfigured_udn_addresses_enable_flag} \ Mar 20 00:10:41 crc kubenswrapper[5106]: --enable-egress-ip=true \ Mar 20 00:10:41 crc kubenswrapper[5106]: --enable-egress-firewall=true \ Mar 20 00:10:41 crc kubenswrapper[5106]: --enable-egress-qos=true \ Mar 20 00:10:41 crc kubenswrapper[5106]: --enable-egress-service=true \ Mar 20 00:10:41 crc kubenswrapper[5106]: --enable-multicast \ Mar 20 00:10:41 crc kubenswrapper[5106]: --enable-multi-external-gateway=true \ Mar 20 00:10:41 crc kubenswrapper[5106]: ${multi_network_policy_enabled_flag} \ Mar 20 00:10:41 crc kubenswrapper[5106]: ${admin_network_policy_enabled_flag} Mar 20 00:10:41 crc kubenswrapper[5106]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8lh8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-trcsc_openshift-ovn-kubernetes(60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 00:10:41 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.166394 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" podUID="60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.236274 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.236343 5106 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.236359 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.236403 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.236420 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:41Z","lastTransitionTime":"2026-03-20T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.339127 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.339210 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.339235 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.339265 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.339288 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:41Z","lastTransitionTime":"2026-03-20T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.441566 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.441652 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.441664 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.441772 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.441797 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:41Z","lastTransitionTime":"2026-03-20T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.527115 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.527229 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.527273 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:10:57.527243126 +0000 UTC m=+111.960977180 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.527342 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.527352 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.527365 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.527375 5106 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.527389 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs\") pod 
\"network-metrics-daemon-5qf4l\" (UID: \"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\") " pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.527410 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.527424 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:57.527410631 +0000 UTC m=+111.961144685 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.527486 5106 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.527523 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-03-20 00:10:57.527515813 +0000 UTC m=+111.961249867 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.527540 5106 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.527565 5106 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.527627 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.527653 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs podName:64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:57.527633276 +0000 UTC m=+111.961367400 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs") pod "network-metrics-daemon-5qf4l" (UID: "64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.527712 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:57.527701808 +0000 UTC m=+111.961435942 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.527756 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.527772 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.527785 5106 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:41 crc kubenswrapper[5106]: E0320 00:10:41.527827 5106 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-03-20 00:10:57.527815871 +0000 UTC m=+111.961549935 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.543785 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.543823 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.543831 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.543844 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.543853 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:41Z","lastTransitionTime":"2026-03-20T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.645601 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.645640 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.645649 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.645661 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.645670 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:41Z","lastTransitionTime":"2026-03-20T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.748209 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.748282 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.748307 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.748336 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.748360 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:41Z","lastTransitionTime":"2026-03-20T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.852782 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.852831 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.852845 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.852861 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.852876 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:41Z","lastTransitionTime":"2026-03-20T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.954986 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.955035 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.955054 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.955075 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:41 crc kubenswrapper[5106]: I0320 00:10:41.955095 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:41Z","lastTransitionTime":"2026-03-20T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.057312 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.057352 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.057361 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.057374 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.057384 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:42Z","lastTransitionTime":"2026-03-20T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.159226 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.159273 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.159284 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.159299 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.159310 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:42Z","lastTransitionTime":"2026-03-20T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.160420 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.160420 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:42 crc kubenswrapper[5106]: E0320 00:10:42.160528 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:10:42 crc kubenswrapper[5106]: E0320 00:10:42.160641 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.161183 5106 scope.go:117] "RemoveContainer" containerID="b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d" Mar 20 00:10:42 crc kubenswrapper[5106]: E0320 00:10:42.161331 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.261028 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.261065 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.261075 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.261087 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.261097 5106 setters.go:618] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:42Z","lastTransitionTime":"2026-03-20T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.362861 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.362907 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.362928 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.362944 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.362955 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:42Z","lastTransitionTime":"2026-03-20T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.464154 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.464198 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.464208 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.464223 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.464232 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:42Z","lastTransitionTime":"2026-03-20T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.566059 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.566098 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.566109 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.566122 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.566131 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:42Z","lastTransitionTime":"2026-03-20T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.668153 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.668193 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.668203 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.668218 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.668228 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:42Z","lastTransitionTime":"2026-03-20T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.770140 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.770186 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.770197 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.770212 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.770221 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:42Z","lastTransitionTime":"2026-03-20T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.872327 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.872409 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.872434 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.872463 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.872485 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:42Z","lastTransitionTime":"2026-03-20T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.974430 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.974466 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.974477 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.974491 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:42 crc kubenswrapper[5106]: I0320 00:10:42.974501 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:42Z","lastTransitionTime":"2026-03-20T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.077040 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.077090 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.077104 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.077121 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.077133 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:43Z","lastTransitionTime":"2026-03-20T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.161094 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.161138 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:43 crc kubenswrapper[5106]: E0320 00:10:43.161245 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:10:43 crc kubenswrapper[5106]: E0320 00:10:43.161421 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.178595 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.178632 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.178641 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.178655 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.178664 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:43Z","lastTransitionTime":"2026-03-20T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.280697 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.280759 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.280778 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.280803 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.280821 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:43Z","lastTransitionTime":"2026-03-20T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.383253 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.383322 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.383344 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.383371 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.383388 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:43Z","lastTransitionTime":"2026-03-20T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.485800 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.485839 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.485852 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.485865 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.485875 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:43Z","lastTransitionTime":"2026-03-20T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.587973 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.588015 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.588024 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.588039 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.588049 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:43Z","lastTransitionTime":"2026-03-20T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.691447 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.691490 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.691498 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.691511 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.691519 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:43Z","lastTransitionTime":"2026-03-20T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.793556 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.793609 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.793619 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.793632 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.793641 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:43Z","lastTransitionTime":"2026-03-20T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.895899 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.895944 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.895954 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.895970 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.895982 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:43Z","lastTransitionTime":"2026-03-20T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.998019 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.998075 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.998084 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.998098 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:43 crc kubenswrapper[5106]: I0320 00:10:43.998108 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:43Z","lastTransitionTime":"2026-03-20T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.099782 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.099822 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.099838 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.099853 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.099862 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:44Z","lastTransitionTime":"2026-03-20T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.160469 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.160526 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:44 crc kubenswrapper[5106]: E0320 00:10:44.160666 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:10:44 crc kubenswrapper[5106]: E0320 00:10:44.160744 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.201976 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.202028 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.202041 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.202057 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.202068 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:44Z","lastTransitionTime":"2026-03-20T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.303833 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.303880 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.303894 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.303911 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.303923 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:44Z","lastTransitionTime":"2026-03-20T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.406065 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.406493 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.406617 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.406706 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.406793 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:44Z","lastTransitionTime":"2026-03-20T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.509548 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.509848 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.509942 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.510053 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.510150 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:44Z","lastTransitionTime":"2026-03-20T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.616502 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.616545 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.616554 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.616568 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.616598 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:44Z","lastTransitionTime":"2026-03-20T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.631397 5106 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.718841 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.718873 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.718882 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.718895 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.718904 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:44Z","lastTransitionTime":"2026-03-20T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.821280 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.822025 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.822196 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.822311 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.822409 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:44Z","lastTransitionTime":"2026-03-20T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.924706 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.924752 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.924764 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.924778 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:44 crc kubenswrapper[5106]: I0320 00:10:44.924787 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:44Z","lastTransitionTime":"2026-03-20T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.027066 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.027097 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.027106 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.027119 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.027127 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:45Z","lastTransitionTime":"2026-03-20T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.128606 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.128642 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.128652 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.128671 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.128683 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:45Z","lastTransitionTime":"2026-03-20T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.160410 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.160417 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:45 crc kubenswrapper[5106]: E0320 00:10:45.160547 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:10:45 crc kubenswrapper[5106]: E0320 00:10:45.160704 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.230916 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.231063 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.231075 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.231088 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.231098 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:45Z","lastTransitionTime":"2026-03-20T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.333039 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.333409 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.333601 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.333732 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.333833 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:45Z","lastTransitionTime":"2026-03-20T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.438108 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.438331 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.438434 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.438511 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.438606 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:45Z","lastTransitionTime":"2026-03-20T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.540746 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.540792 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.540805 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.540821 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.540832 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:45Z","lastTransitionTime":"2026-03-20T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.643207 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.643254 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.643266 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.643283 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.643294 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:45Z","lastTransitionTime":"2026-03-20T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.745478 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.745546 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.745564 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.745629 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.745713 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:45Z","lastTransitionTime":"2026-03-20T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.848127 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.848204 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.848226 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.848252 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.848269 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:45Z","lastTransitionTime":"2026-03-20T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.950606 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.950643 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.950653 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.950666 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:45 crc kubenswrapper[5106]: I0320 00:10:45.950677 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:45Z","lastTransitionTime":"2026-03-20T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.052309 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.052652 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.052747 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.052839 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.052945 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:46Z","lastTransitionTime":"2026-03-20T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.155275 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.155512 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.155667 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.155761 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.155840 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:46Z","lastTransitionTime":"2026-03-20T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.160528 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.160602 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:46 crc kubenswrapper[5106]: E0320 00:10:46.160706 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:10:46 crc kubenswrapper[5106]: E0320 00:10:46.160778 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.257730 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.257786 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.257798 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.257815 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.257831 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:46Z","lastTransitionTime":"2026-03-20T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.360671 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.360728 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.360738 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.360753 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.360762 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:46Z","lastTransitionTime":"2026-03-20T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.464401 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.464469 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.464488 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.464512 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.464538 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:46Z","lastTransitionTime":"2026-03-20T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.566931 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.567213 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.567282 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.567360 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.567504 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:46Z","lastTransitionTime":"2026-03-20T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.669084 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.669388 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.669501 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.669674 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.669775 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:46Z","lastTransitionTime":"2026-03-20T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.772002 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.772243 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.772311 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.772381 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.772446 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:46Z","lastTransitionTime":"2026-03-20T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.875245 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.875299 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.875318 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.875334 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.875345 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:46Z","lastTransitionTime":"2026-03-20T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.977811 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.977859 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.977871 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.977887 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:46 crc kubenswrapper[5106]: I0320 00:10:46.977899 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:46Z","lastTransitionTime":"2026-03-20T00:10:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.080557 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.080637 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.080650 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.080666 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.080677 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:47Z","lastTransitionTime":"2026-03-20T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.160875 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:47 crc kubenswrapper[5106]: E0320 00:10:47.161078 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.161162 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:47 crc kubenswrapper[5106]: E0320 00:10:47.161369 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.180840 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfb8ff3-0a09-426e-a55b-fdb55d63f156\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8eeb239f1
f4dc20d7a4eb3082743b15402fcfa64c2d765f055b60febb4bb8159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e57cd65d5edb028b95e533a8f24e531dd67b433e774ff14db6159c675c881932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://db54af3009de86e4a8fcc0fdc5233912d638c6cb4df949f46d9d0836472096c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f027531e351171f27f0d234f1340e8fc330fd1beef5df845b3d7c67d8f8cdd5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources
\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8428ebe4abc8dd7a8d292a9989c16c6616ff7e40a01779178094a437cb8af76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\
\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.182712 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.182820 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.182852 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.182968 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.183069 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:47Z","lastTransitionTime":"2026-03-20T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.191954 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.199701 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qf4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.206884 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-zqbrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9d176-e017-4ab6-b0ad-7d97c5746baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24fjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zqbrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.218640 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b816935-a84b-4aa1-850d-23de45ec2047\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a52ed260d73e5a0b44832957540e259da3d4ef397908acd15ad2b9a53eac5878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.229452 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268db469-5250-4a96-998b-1d4a465bb175\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac75ee6fde37618d4f3addeaf11a5035d2a16b4a85f84bfb1ce62033bdfaaad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPa
th\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://35863999352b5c14cf487ffef517f6c5011b42d6b9129d6f9f161ad9b265e356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi
\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a23d38bcc5416cc2ef716b58ecbcf50ca7b47b3b0e633e63c4cb2566c0285d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.239261 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.247561 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.255422 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.263019 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.272014 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.280135 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-trcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.285203 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.285243 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.285253 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.285266 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.285278 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:47Z","lastTransitionTime":"2026-03-20T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.304509 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f7bf04f-91df-48c2-916a-afe1e635b543\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b378391966
874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0320 00:10:17.305762 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 00:10:17.305884 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0320 00:10:17.306612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-695311479/tls.crt::/tmp/serving-cert-695311479/tls.key\\\\\\\"\\\\nI0320 00:10:18.010003 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 00:10:18.011836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 00:10:18.011852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 00:10:18.011883 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 00:10:18.011894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 00:10:18.015663 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 00:10:18.015691 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015698 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 00:10:18.015708 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 00:10:18.015711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 00:10:18.015715 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0320 00:10:18.015743 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0320 00:10:18.017230 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.323524 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fc70aa-db07-47cd-b307-36ca79bc3366\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wwnpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.335890 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kq4bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwttp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kq4bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.345885 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c5fc6b1-3d0e-4319-b15c-341af0218c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23b036e80d136c45a9ff13a63408428b22b422d4f4dcf9c1e8973f1ff9c5bb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea7f9d9918db9d162a898a623186f1b6c094b8b6a48de2e07d8a3b950e1989a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://889ed27676105199f97185f7271b652aed7692b5b50a68e9eb5403ffffe024dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.355786 5106 
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xtksh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xtksh\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.371897 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99795294-4844-44e8-b55b-998323bd4f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvw6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.380354 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a6c6201-eadf-497e-921b-e5fcec3ccddb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-769dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.386719 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.386779 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.386791 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.386806 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.386815 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:47Z","lastTransitionTime":"2026-03-20T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.489097 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.489452 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.489463 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.489478 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.489488 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:47Z","lastTransitionTime":"2026-03-20T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.591540 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.591616 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.591627 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.591664 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.591685 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:47Z","lastTransitionTime":"2026-03-20T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.693711 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.693762 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.693771 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.693786 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.693797 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:47Z","lastTransitionTime":"2026-03-20T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.796107 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.796165 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.796188 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.796216 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.796237 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:47Z","lastTransitionTime":"2026-03-20T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.898385 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.898423 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.898432 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.898445 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:47 crc kubenswrapper[5106]: I0320 00:10:47.898454 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:47Z","lastTransitionTime":"2026-03-20T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.000484 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.000544 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.000557 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.000571 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.000638 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:48Z","lastTransitionTime":"2026-03-20T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.102775 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.102818 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.102829 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.102843 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.102853 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:48Z","lastTransitionTime":"2026-03-20T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.160682 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:48 crc kubenswrapper[5106]: E0320 00:10:48.160841 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.161028 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:48 crc kubenswrapper[5106]: E0320 00:10:48.161361 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.206197 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.206261 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.206273 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.206294 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.206320 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:48Z","lastTransitionTime":"2026-03-20T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.308929 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.308991 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.309004 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.309022 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.309035 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:48Z","lastTransitionTime":"2026-03-20T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.410287 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.410330 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.410340 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.410359 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.410370 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:48Z","lastTransitionTime":"2026-03-20T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.476120 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zqbrj" event={"ID":"58f9d176-e017-4ab6-b0ad-7d97c5746baf","Type":"ContainerStarted","Data":"f57ca4ed42258fd7b5ebc47a3faa76ead7564350e2efb1768ce3d03f72d37077"} Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.487350 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268db469-5250-4a96-998b-1d4a465bb175\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac75ee6fde37618d4f3addeaf11a5035d2a16b4a85f84bfb1ce62033bdfaaad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\
\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://35863999352b5c14cf487ffef517f6c5011b42d
6b9129d6f9f161ad9b265e356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a23d38bcc5416cc2ef716b58ecbcf50ca7b47b3b0e633e63c4cb2566c0285d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.498078 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.512645 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.512683 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.512692 5106 kubelet_node_status.go:736] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.512705 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.512716 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:48Z","lastTransitionTime":"2026-03-20T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.513391 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.524665 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.535865 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.545168 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.558290 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-trcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.571370 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f7bf04f-91df-48c2-916a-afe1e635b543\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b378391966
874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0320 00:10:17.305762 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 00:10:17.305884 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0320 00:10:17.306612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-695311479/tls.crt::/tmp/serving-cert-695311479/tls.key\\\\\\\"\\\\nI0320 00:10:18.010003 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 00:10:18.011836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 00:10:18.011852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 00:10:18.011883 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 00:10:18.011894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 00:10:18.015663 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 00:10:18.015691 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015698 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 00:10:18.015708 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 00:10:18.015711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 00:10:18.015715 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0320 00:10:18.015743 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0320 00:10:18.017230 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.583370 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fc70aa-db07-47cd-b307-36ca79bc3366\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wwnpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.592221 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kq4bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwttp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kq4bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.602252 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c5fc6b1-3d0e-4319-b15c-341af0218c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23b036e80d136c45a9ff13a63408428b22b422d4f4dcf9c1e8973f1ff9c5bb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea7f9d9918db9d162a898a623186f1b6c094b8b6a48de2e07d8a3b950e1989a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://889ed27676105199f97185f7271b652aed7692b5b50a68e9eb5403ffffe024dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.615722 5106 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.615773 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.615783 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.615797 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.615807 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:48Z","lastTransitionTime":"2026-03-20T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.618754 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xtksh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xtksh\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.641972 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99795294-4844-44e8-b55b-998323bd4f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvw6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.654648 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a6c6201-eadf-497e-921b-e5fcec3ccddb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-769dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.672662 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfb8ff3-0a09-426e-a55b-fdb55d63f156\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8eeb239f1f4dc20d7a4eb3082743b15402fcfa64c2d765f055b60febb4bb8159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e57cd65d5edb028b95e533a8f24e531dd67b433e774ff14db6159c675c881932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://db54af3009de86e4a8fcc0fdc5233912d638c6cb4df949f
46d9d0836472096c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f027531e351171f27f0d234f1340e8fc330fd1beef5df845b3d7c67d8f8cdd5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"na
me\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8428ebe4abc8dd7a8d292a9989c16c6616ff7e40a01779178094a437cb8af76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":
{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b62674
47cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.683541 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.693418 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qf4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.701372 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-zqbrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9d176-e017-4ab6-b0ad-7d97c5746baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://f57ca4ed42258fd7b5ebc47a3faa76ead7564350e2efb1768ce3d03f72d37077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/
tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24fjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zqbrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.709094 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b816935-a84b-4aa1-850d-23de45ec2047\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a52ed260d73e5a0b4483295754
0e259da3d4ef397908acd15ad2b9a53eac5878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.717634 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.717666 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.717677 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.717689 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.717698 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:48Z","lastTransitionTime":"2026-03-20T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.798885 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.798971 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.798992 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.799025 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.799044 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:48Z","lastTransitionTime":"2026-03-20T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:48 crc kubenswrapper[5106]: E0320 00:10:48.811166 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.814281 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.814338 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.814349 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.814364 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.814375 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:48Z","lastTransitionTime":"2026-03-20T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:48 crc kubenswrapper[5106]: E0320 00:10:48.823817 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.828983 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.829028 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.829038 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.829056 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.829067 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:48Z","lastTransitionTime":"2026-03-20T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:48 crc kubenswrapper[5106]: E0320 00:10:48.837316 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.840103 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.840137 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.840152 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.840169 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.840180 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:48Z","lastTransitionTime":"2026-03-20T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:48 crc kubenswrapper[5106]: E0320 00:10:48.847711 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.850098 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.850128 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.850138 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.850151 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.850161 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:48Z","lastTransitionTime":"2026-03-20T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:48 crc kubenswrapper[5106]: E0320 00:10:48.857672 5106 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a9af530b-46e3-4432-bc61-2c5eccf70cd7\\\",\\\"systemUUID\\\":\\\"fdcdcd70-d7d0-45f6-8fe8-c45ef984f286\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:48 crc kubenswrapper[5106]: E0320 00:10:48.857790 5106 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.858842 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.858869 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.858878 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.858890 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.858900 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:48Z","lastTransitionTime":"2026-03-20T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.960887 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.960932 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.960940 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.960954 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:48 crc kubenswrapper[5106]: I0320 00:10:48.960965 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:48Z","lastTransitionTime":"2026-03-20T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.063192 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.063228 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.063239 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.063252 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.063260 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:49Z","lastTransitionTime":"2026-03-20T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.161032 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.161054 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:49 crc kubenswrapper[5106]: E0320 00:10:49.161232 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:10:49 crc kubenswrapper[5106]: E0320 00:10:49.161340 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.165262 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.165306 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.165319 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.165335 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.165346 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:49Z","lastTransitionTime":"2026-03-20T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.267268 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.267340 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.267360 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.267380 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.267395 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:49Z","lastTransitionTime":"2026-03-20T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.370151 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.370227 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.370252 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.370277 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.370301 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:49Z","lastTransitionTime":"2026-03-20T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.472925 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.472967 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.472977 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.472990 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.473000 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:49Z","lastTransitionTime":"2026-03-20T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.575493 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.575540 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.575549 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.575562 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.575598 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:49Z","lastTransitionTime":"2026-03-20T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.678646 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.678693 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.678704 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.678720 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.678729 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:49Z","lastTransitionTime":"2026-03-20T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.780743 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.780808 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.780827 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.780848 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.780864 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:49Z","lastTransitionTime":"2026-03-20T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.883372 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.883413 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.883431 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.883450 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.883461 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:49Z","lastTransitionTime":"2026-03-20T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.985451 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.985504 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.985514 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.985527 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:49 crc kubenswrapper[5106]: I0320 00:10:49.985557 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:49Z","lastTransitionTime":"2026-03-20T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.087445 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.087505 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.087518 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.087533 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.087547 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:50Z","lastTransitionTime":"2026-03-20T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.160744 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.160760 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l"
Mar 20 00:10:50 crc kubenswrapper[5106]: E0320 00:10:50.160977 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Mar 20 00:10:50 crc kubenswrapper[5106]: E0320 00:10:50.161119 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.189486 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.189528 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.189538 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.189553 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.189564 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:50Z","lastTransitionTime":"2026-03-20T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.291539 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.291601 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.291613 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.291633 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.291644 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:50Z","lastTransitionTime":"2026-03-20T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.393711 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.393746 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.393755 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.393770 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.393780 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:50Z","lastTransitionTime":"2026-03-20T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.495172 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.495209 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.495218 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.495232 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.495242 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:50Z","lastTransitionTime":"2026-03-20T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.597282 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.597333 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.597342 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.597357 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.597367 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:50Z","lastTransitionTime":"2026-03-20T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.699446 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.699497 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.699506 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.699520 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.699532 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:50Z","lastTransitionTime":"2026-03-20T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.801956 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.802001 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.802013 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.802027 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.802038 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:50Z","lastTransitionTime":"2026-03-20T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.904249 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.904299 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.904310 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.904328 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:50 crc kubenswrapper[5106]: I0320 00:10:50.904340 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:50Z","lastTransitionTime":"2026-03-20T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.006164 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.006208 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.006220 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.006239 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.006253 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:51Z","lastTransitionTime":"2026-03-20T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.108437 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.108489 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.108506 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.108528 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.108543 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:51Z","lastTransitionTime":"2026-03-20T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.164902 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Mar 20 00:10:51 crc kubenswrapper[5106]: E0320 00:10:51.165031 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.165046 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Mar 20 00:10:51 crc kubenswrapper[5106]: E0320 00:10:51.165422 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.210641 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.210715 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.210739 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.210845 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.210875 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:51Z","lastTransitionTime":"2026-03-20T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.312733 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.312775 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.312788 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.312805 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.312817 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:51Z","lastTransitionTime":"2026-03-20T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.415340 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.415386 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.415396 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.415409 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.415419 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:51Z","lastTransitionTime":"2026-03-20T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.486012 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerStarted","Data":"88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1"}
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.498290 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c5fc6b1-3d0e-4319-b15c-341af0218c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23b036e80d136c45a9ff13a63408428b22b422d4f4dcf9c1e8973f1ff9c5bb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\
\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea7f9d9918db9d162a898a623186f1b6c094b8b6a48de2e07d8a3b950e1989a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://889ed27676105199f97185f7271b652aed7692b5b50a68e9eb5403ffffe024dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c
6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGr
oups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.509501 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xtksh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xtksh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.517209 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.517255 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.517266 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.517282 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.517293 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:51Z","lastTransitionTime":"2026-03-20T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.524372 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99795294-4844-44e8-b55b-998323bd4f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvw6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.531980 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a6c6201-eadf-497e-921b-e5fcec3ccddb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-769dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.548777 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfb8ff3-0a09-426e-a55b-fdb55d63f156\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8eeb239f1f4dc20d7a4eb3082743b15402fcfa64c2d765f055b60febb4bb8159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e57cd65d5edb028b95e533a8f24e531dd67b433e774ff14db6159c675c881932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://db54af3009de86e4a8fcc0fdc5233912d638c6cb4df949f
46d9d0836472096c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f027531e351171f27f0d234f1340e8fc330fd1beef5df845b3d7c67d8f8cdd5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"na
me\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8428ebe4abc8dd7a8d292a9989c16c6616ff7e40a01779178094a437cb8af76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":
{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b62674
47cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.557598 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.564057 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qf4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.570415 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-zqbrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9d176-e017-4ab6-b0ad-7d97c5746baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://f57ca4ed42258fd7b5ebc47a3faa76ead7564350e2efb1768ce3d03f72d37077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/
tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24fjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zqbrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.577798 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b816935-a84b-4aa1-850d-23de45ec2047\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a52ed260d73e5a0b4483295754
0e259da3d4ef397908acd15ad2b9a53eac5878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.586912 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268db469-5250-4a96-998b-1d4a465bb175\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a
c75ee6fde37618d4f3addeaf11a5035d2a16b4a85f84bfb1ce62033bdfaaad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\
":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://35863999352b5c14cf487ffef517f6c5011b42d6b9129d6f9f161ad9b265e356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a23d38bcc5416cc2ef716b58ecbcf50ca7b47b3b0e633e63c4cb2566c0285d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.595572 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.602451 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.610727 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.617912 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.619078 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:51 crc 
kubenswrapper[5106]: I0320 00:10:51.619105 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.619116 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.619131 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.619143 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:51Z","lastTransitionTime":"2026-03-20T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.626156 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.633590 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-trcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.643883 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f7bf04f-91df-48c2-916a-afe1e635b543\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\
\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b
3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0320 00:10:17.305762 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 00:10:17.305884 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0320 00:10:17.306612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-695311479/tls.crt::/tmp/serving-cert-695311479/tls.key\\\\\\\"\\\\nI0320 00:10:18.010003 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 00:10:18.011836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 00:10:18.011852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 00:10:18.011883 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 00:10:18.011894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 00:10:18.015663 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 00:10:18.015691 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015698 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 00:10:18.015708 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 00:10:18.015711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 00:10:18.015715 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0320 00:10:18.015743 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0320 00:10:18.017230 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.653057 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fc70aa-db07-47cd-b307-36ca79bc3366\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wwnpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.659567 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kq4bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwttp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kq4bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.721919 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.721956 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.721967 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.721980 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.721988 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:51Z","lastTransitionTime":"2026-03-20T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.824404 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.824441 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.824451 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.824463 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.824472 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:51Z","lastTransitionTime":"2026-03-20T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.926159 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.926195 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.926206 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.926218 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:51 crc kubenswrapper[5106]: I0320 00:10:51.926229 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:51Z","lastTransitionTime":"2026-03-20T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.028342 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.028389 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.028400 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.028422 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.028434 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:52Z","lastTransitionTime":"2026-03-20T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.130323 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.130363 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.130372 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.130386 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.130396 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:52Z","lastTransitionTime":"2026-03-20T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.160640 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.160640 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:52 crc kubenswrapper[5106]: E0320 00:10:52.160748 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:10:52 crc kubenswrapper[5106]: E0320 00:10:52.164563 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.232268 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.232309 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.232320 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.232337 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.232347 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:52Z","lastTransitionTime":"2026-03-20T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.334037 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.334148 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.334173 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.334199 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.334216 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:52Z","lastTransitionTime":"2026-03-20T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.436428 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.436488 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.436510 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.436551 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.436714 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:52Z","lastTransitionTime":"2026-03-20T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.491711 5106 generic.go:358] "Generic (PLEG): container finished" podID="99795294-4844-44e8-b55b-998323bd4f6e" containerID="88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1" exitCode=0 Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.491776 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerDied","Data":"88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1"} Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.494117 5106 generic.go:358] "Generic (PLEG): container finished" podID="65fc70aa-db07-47cd-b307-36ca79bc3366" containerID="98eff49bb458a51ffc4487102dc29b9929a5366f463ffff54f418c44cc812a3d" exitCode=0 Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.494183 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" event={"ID":"65fc70aa-db07-47cd-b307-36ca79bc3366","Type":"ContainerDied","Data":"98eff49bb458a51ffc4487102dc29b9929a5366f463ffff54f418c44cc812a3d"} Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.509440 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.524612 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.538836 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.540547 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:52 crc 
kubenswrapper[5106]: I0320 00:10:52.540744 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.540767 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.540787 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.540797 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:52Z","lastTransitionTime":"2026-03-20T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.550468 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.563255 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-trcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.576179 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f7bf04f-91df-48c2-916a-afe1e635b543\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\
\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b
3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0320 00:10:17.305762 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 00:10:17.305884 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0320 00:10:17.306612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-695311479/tls.crt::/tmp/serving-cert-695311479/tls.key\\\\\\\"\\\\nI0320 00:10:18.010003 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 00:10:18.011836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 00:10:18.011852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 00:10:18.011883 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 00:10:18.011894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 00:10:18.015663 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 00:10:18.015691 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015698 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 00:10:18.015708 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 00:10:18.015711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 00:10:18.015715 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0320 00:10:18.015743 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0320 00:10:18.017230 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.593308 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fc70aa-db07-47cd-b307-36ca79bc3366\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wwnpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.602103 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kq4bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwttp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kq4bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.612882 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c5fc6b1-3d0e-4319-b15c-341af0218c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23b036e80d136c45a9ff13a63408428b22b422d4f4dcf9c1e8973f1ff9c5bb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea7f9d9918db9d162a898a623186f1b6c094b8b6a48de2e07d8a3b950e1989a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://889ed27676105199f97185f7271b652aed7692b5b50a68e9eb5403ffffe024dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.623595 5106 
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xtksh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xtksh\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.639158 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99795294-4844-44e8-b55b-998323bd4f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:10:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvw6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.642891 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.642917 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.642927 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.642940 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.642949 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:52Z","lastTransitionTime":"2026-03-20T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.648460 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a6c6201-eadf-497e-921b-e5fcec3ccddb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-769dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.667739 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfb8ff3-0a09-426e-a55b-fdb55d63f156\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8eeb239f1f4dc20d7a4eb3082743b15402fcfa64c2d765f055b60febb4bb8159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e57cd65d5edb028b95e533a8f24e531dd67b433e774ff14db6159c675c881932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://db54af3009de86e4a8fcc0fdc5233912d638c6cb4df949f46d9d0836472096c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f027531e351171f27f0d234f1340e8fc330fd1beef5df845b3d7c67d8f8cdd5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8428ebe4abc8dd7a8d292a9989c16c6616ff7e40a01779178094a437cb8af76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.678920 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.687479 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qf4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.695377 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-zqbrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9d176-e017-4ab6-b0ad-7d97c5746baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://f57ca4ed42258fd7b5ebc47a3faa76ead7564350e2efb1768ce3d03f72d37077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/
tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24fjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zqbrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.704919 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b816935-a84b-4aa1-850d-23de45ec2047\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a52ed260d73e5a0b4483295754
0e259da3d4ef397908acd15ad2b9a53eac5878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.715753 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268db469-5250-4a96-998b-1d4a465bb175\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a
c75ee6fde37618d4f3addeaf11a5035d2a16b4a85f84bfb1ce62033bdfaaad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\
":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://35863999352b5c14cf487ffef517f6c5011b42d6b9129d6f9f161ad9b265e356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a23d38bcc5416cc2ef716b58ecbcf50ca7b47b3b0e633e63c4cb2566c0285d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.724781 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.735674 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fc70aa-db07-47cd-b307-36ca79bc3366\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98eff49bb458a51ffc4487102dc29b9929a5366f463ffff54f418c44cc812a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://98eff49bb458a51ffc4487102dc29b9929a5366f463ffff54f418c44cc812a3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:10:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6e
b6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wwnpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.743314 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kq4bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwttp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kq4bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.746247 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.746283 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.746294 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.746310 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.746321 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:52Z","lastTransitionTime":"2026-03-20T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.752594 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c5fc6b1-3d0e-4319-b15c-341af0218c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23b036e80d136c45a9ff13a63408428b22b422d4f4dcf9c1e8973f1ff9c5bb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\
":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea7f9d9918db9d162a898a623186f1b6c094b8b6a48de2e07d8a3b950e1989a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://889ed27676105199f97185f7271b652aed7692b5b50a68e9eb5403ffffe024dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-contr
oller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod 
\"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.760717 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xtksh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xtksh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.774825 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99795294-4844-44e8-b55b-998323bd4f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:10:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvw6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.781678 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a6c6201-eadf-497e-921b-e5fcec3ccddb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-769dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.799615 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfb8ff3-0a09-426e-a55b-fdb55d63f156\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8eeb239f1f4dc20d7a4eb3082743b15402fcfa64c2d765f055b60febb4bb8159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e57cd65d5edb028b95e533a8f24e531dd67b433e774ff14db6159c675c881932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://db54af3009de86e4a8fcc0fdc5233912d638c6cb4df949f46d9d0836472096c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f027531e351171f27f0d234f1340e8fc330fd1beef5df845b3d7c67d8f8cdd5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8428ebe4abc8dd7a8d292a9989c16c6616ff7e40a01779178094a437cb8af76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.808778 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.815739 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qf4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.823257 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-zqbrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9d176-e017-4ab6-b0ad-7d97c5746baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://f57ca4ed42258fd7b5ebc47a3faa76ead7564350e2efb1768ce3d03f72d37077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/
tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24fjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zqbrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.831383 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b816935-a84b-4aa1-850d-23de45ec2047\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a52ed260d73e5a0b4483295754
0e259da3d4ef397908acd15ad2b9a53eac5878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.842431 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268db469-5250-4a96-998b-1d4a465bb175\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a
c75ee6fde37618d4f3addeaf11a5035d2a16b4a85f84bfb1ce62033bdfaaad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\
":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://35863999352b5c14cf487ffef517f6c5011b42d6b9129d6f9f161ad9b265e356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a23d38bcc5416cc2ef716b58ecbcf50ca7b47b3b0e633e63c4cb2566c0285d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.849626 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.849665 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 
00:10:52.849680 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.849696 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.849706 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:52Z","lastTransitionTime":"2026-03-20T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.855332 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.862950 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.872324 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.880694 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.891629 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.899492 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-trcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.911011 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f7bf04f-91df-48c2-916a-afe1e635b543\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b378391966
874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0320 00:10:17.305762 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 00:10:17.305884 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0320 00:10:17.306612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-695311479/tls.crt::/tmp/serving-cert-695311479/tls.key\\\\\\\"\\\\nI0320 00:10:18.010003 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 00:10:18.011836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 00:10:18.011852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 00:10:18.011883 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 00:10:18.011894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 00:10:18.015663 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 00:10:18.015691 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015698 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 00:10:18.015708 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 00:10:18.015711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 00:10:18.015715 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0320 00:10:18.015743 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0320 00:10:18.017230 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.952045 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.952095 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.952108 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.952126 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:52 crc kubenswrapper[5106]: I0320 00:10:52.952141 5106 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:52Z","lastTransitionTime":"2026-03-20T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.054837 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.054867 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.054877 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.054889 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.054897 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:53Z","lastTransitionTime":"2026-03-20T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.157168 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.157221 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.157234 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.157253 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.157270 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:53Z","lastTransitionTime":"2026-03-20T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.160114 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:53 crc kubenswrapper[5106]: E0320 00:10:53.160222 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.161030 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:53 crc kubenswrapper[5106]: E0320 00:10:53.161095 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.258744 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.258794 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.258806 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.258824 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.258839 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:53Z","lastTransitionTime":"2026-03-20T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.361129 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.361176 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.361188 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.361206 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.361220 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:53Z","lastTransitionTime":"2026-03-20T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.463342 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.463394 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.463407 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.463422 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.463434 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:53Z","lastTransitionTime":"2026-03-20T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.498993 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"f4766981f8e40e8708146759f008b2872a2ab29657bcc7a3acb61209af814cef"} Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.502332 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerStarted","Data":"2ddd9af58aa57b0d38a952b32ea235cc71190518291c29253037899f6abe3436"} Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.502371 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerStarted","Data":"05fffb60827beb7046e691cc7177ed8b7993dd8d1fd1d950c15861a7134a589f"} Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.502384 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerStarted","Data":"6c59b9743060c37ccc6998ad273851bf70a36a19866d8a37f385a982d31a58df"} Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.502445 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerStarted","Data":"2b518e312797761600d953f4d2468ed5a689003063f65aac80dfc2d4e3197641"} Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.504833 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" event={"ID":"65fc70aa-db07-47cd-b307-36ca79bc3366","Type":"ContainerStarted","Data":"060b2ea1dbbae4b24c417f403a49b4c6a7ef51104b4406627df8e6ec649f814e"} Mar 20 00:10:53 crc 
kubenswrapper[5106]: I0320 00:10:53.520066 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.528212 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qf4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.536896 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-zqbrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9d176-e017-4ab6-b0ad-7d97c5746baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://f57ca4ed42258fd7b5ebc47a3faa76ead7564350e2efb1768ce3d03f72d37077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/
tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24fjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zqbrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.545727 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b816935-a84b-4aa1-850d-23de45ec2047\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a52ed260d73e5a0b4483295754
0e259da3d4ef397908acd15ad2b9a53eac5878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.555827 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268db469-5250-4a96-998b-1d4a465bb175\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a
c75ee6fde37618d4f3addeaf11a5035d2a16b4a85f84bfb1ce62033bdfaaad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\
":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://35863999352b5c14cf487ffef517f6c5011b42d6b9129d6f9f161ad9b265e356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a23d38bcc5416cc2ef716b58ecbcf50ca7b47b3b0e633e63c4cb2566c0285d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.565310 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.567457 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.567498 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.567512 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.567528 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.567540 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:53Z","lastTransitionTime":"2026-03-20T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.575273 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.585493 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.593571 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.602815 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.612384 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-trcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.624595 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f7bf04f-91df-48c2-916a-afe1e635b543\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b378391966
874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0320 00:10:17.305762 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 00:10:17.305884 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0320 00:10:17.306612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-695311479/tls.crt::/tmp/serving-cert-695311479/tls.key\\\\\\\"\\\\nI0320 00:10:18.010003 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 00:10:18.011836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 00:10:18.011852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 00:10:18.011883 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 00:10:18.011894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 00:10:18.015663 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 00:10:18.015691 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015698 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 00:10:18.015708 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 00:10:18.015711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 00:10:18.015715 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0320 00:10:18.015743 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0320 00:10:18.017230 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.635519 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fc70aa-db07-47cd-b307-36ca79bc3366\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98eff49bb458a51ffc4487102dc29b9929a5366f463ffff54f418c44cc812a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98eff49bb458a51ffc4487102dc29b9929a5366f463ffff54f418c44cc812a3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:10:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060b2ea1dbbae4b24c417f403a49b4c6a7ef51104b4406627df8e6ec649f814e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\
\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wwnpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.643229 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kq4bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwttp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kq4bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.651708 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c5fc6b1-3d0e-4319-b15c-341af0218c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23b036e80d136c45a9ff13a63408428b22b422d4f4dcf9c1e8973f1ff9c5bb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea7f9d9918db9d162a898a623186f1b6c094b8b6a48de2e07d8a3b950e1989a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://889ed27676105199f97185f7271b652aed7692b5b50a68e9eb5403ffffe024dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.662154 5106 
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xtksh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xtksh\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.669672 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.669709 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.669721 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.669736 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.669748 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:53Z","lastTransitionTime":"2026-03-20T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.676955 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99795294-4844-44e8-b55b-998323bd4f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:10:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvw6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.684920 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a6c6201-eadf-497e-921b-e5fcec3ccddb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-769dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.700892 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfb8ff3-0a09-426e-a55b-fdb55d63f156\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8eeb239f1f4dc20d7a4eb3082743b15402fcfa64c2d765f055b60febb4bb8159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e57cd65d5edb028b95e533a8f24e531dd67b433e774ff14db6159c675c881932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://db54af3009de86e4a8fcc0fdc5233912d638c6cb4df949f46d9d0836472096c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f027531e351171f27f0d234f1340e8fc330fd1beef5df845b3d7c67d8f8cdd5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8428ebe4abc8dd7a8d292a9989c16c6616ff7e40a01779178094a437cb8af76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.717960 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfb8ff3-0a09-426e-a55b-fdb55d63f156\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8eeb239f1f4dc20d7a4eb3082743b15402fcfa64c2d765f055b60febb4bb8159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e57cd65d5edb028b95e533a8f24e531dd67b433e774ff14db6159c675c881932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://db54af3009de86e4a8fcc0fdc5233912d638c6cb4df949f46d9d0836472096c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f027531e351171f27f0d234f1340e8fc330fd1beef5df845b3d7c67d8f8cdd5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8428ebe4abc8dd7a8d292a9989c16c6616ff7e40a01779178094a437cb8af76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.729845 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.739312 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qf4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.746461 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-zqbrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9d176-e017-4ab6-b0ad-7d97c5746baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://f57ca4ed42258fd7b5ebc47a3faa76ead7564350e2efb1768ce3d03f72d37077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/
tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24fjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zqbrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.752725 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b816935-a84b-4aa1-850d-23de45ec2047\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a52ed260d73e5a0b4483295754
0e259da3d4ef397908acd15ad2b9a53eac5878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.761408 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268db469-5250-4a96-998b-1d4a465bb175\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a
c75ee6fde37618d4f3addeaf11a5035d2a16b4a85f84bfb1ce62033bdfaaad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\
":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://35863999352b5c14cf487ffef517f6c5011b42d6b9129d6f9f161ad9b265e356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a23d38bcc5416cc2ef716b58ecbcf50ca7b47b3b0e633e63c4cb2566c0285d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.768571 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.771235 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.771270 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.771280 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.771293 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.771303 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:53Z","lastTransitionTime":"2026-03-20T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.775481 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.784317 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.795839 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://f4766981f8e40e8708146759f008b2872a2ab29657bcc7a3acb61209af814cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"10m\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.805088 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.814918 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-trcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.825095 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f7bf04f-91df-48c2-916a-afe1e635b543\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b378391966
874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0320 00:10:17.305762 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 00:10:17.305884 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0320 00:10:17.306612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-695311479/tls.crt::/tmp/serving-cert-695311479/tls.key\\\\\\\"\\\\nI0320 00:10:18.010003 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 00:10:18.011836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 00:10:18.011852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 00:10:18.011883 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 00:10:18.011894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 00:10:18.015663 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 00:10:18.015691 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015698 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 00:10:18.015708 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 00:10:18.015711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 00:10:18.015715 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0320 00:10:18.015743 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0320 00:10:18.017230 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.834121 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fc70aa-db07-47cd-b307-36ca79bc3366\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98eff49bb458a51ffc4487102dc29b9929a5366f463ffff54f418c44cc812a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98eff49bb458a51ffc4487102dc29b9929a5366f463ffff54f418c44cc812a3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:10:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060b2ea1dbbae4b24c417f403a49b4c6a7ef51104b4406627df8e6ec649f814e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\
\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wwnpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.840659 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kq4bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwttp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kq4bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.848289 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c5fc6b1-3d0e-4319-b15c-341af0218c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23b036e80d136c45a9ff13a63408428b22b422d4f4dcf9c1e8973f1ff9c5bb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea7f9d9918db9d162a898a623186f1b6c094b8b6a48de2e07d8a3b950e1989a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://889ed27676105199f97185f7271b652aed7692b5b50a68e9eb5403ffffe024dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.856186 5106 
status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xtksh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xtksh\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.869315 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99795294-4844-44e8-b55b-998323bd4f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:10:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvw6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.872974 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.873012 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.873024 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.873040 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.873053 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:53Z","lastTransitionTime":"2026-03-20T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.877234 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a6c6201-eadf-497e-921b-e5fcec3ccddb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-769dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.974548 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.974598 5106 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.974609 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.974623 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:53 crc kubenswrapper[5106]: I0320 00:10:53.974632 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:53Z","lastTransitionTime":"2026-03-20T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.075972 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.076023 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.076035 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.076052 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.076064 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:54Z","lastTransitionTime":"2026-03-20T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.160876 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Mar 20 00:10:54 crc kubenswrapper[5106]: E0320 00:10:54.160986 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.161019 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l"
Mar 20 00:10:54 crc kubenswrapper[5106]: E0320 00:10:54.161209 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.177508 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.177546 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.177558 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.177589 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.177602 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:54Z","lastTransitionTime":"2026-03-20T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.282028 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.282096 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.282116 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.282138 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.282153 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:54Z","lastTransitionTime":"2026-03-20T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.384618 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.384677 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.384694 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.384713 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.384727 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:54Z","lastTransitionTime":"2026-03-20T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.486742 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.486789 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.486800 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.486815 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.486826 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:54Z","lastTransitionTime":"2026-03-20T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.514815 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xtksh" event={"ID":"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9","Type":"ContainerStarted","Data":"4884a24b5e56e4fa296eff21cdf419b0193f65bffeaf8fcd6a1ad11c289ae430"}
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.516154 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"56b16f01d4a5ea21981edfae77e49d0f1962ed6d21e5a0e26ad37f9ac42bbe16"}
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.523072 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"708d1108600e19bf17f8e7d9b46b72aba31e28d1e1d323936485aca50078de2f"}
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.523107 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"ce2e8842699a47c1b0a130e93b149bbb197305493fe7020fbe58717b8475513b"}
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.524312 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.525160 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerStarted","Data":"37037d48633e82a846cec06b51f564d21ade971368eb1a8a5bb29b596c2c5ca2"}
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.525194 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerStarted","Data":"e305e307099c05996c1326f05d1414ce358ed6c0ec58221736b93d0a4312344c"}
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.526403 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kq4bp" event={"ID":"0e025495-7d3d-4ff6-a3af-a6d3c459cc74","Type":"ContainerStarted","Data":"33a4e13819accc3808df17efd6a1592b1c0f9dc24ff01edb45e0ab9de7706e53"}
Mar 20 00:10:54 crc
kubenswrapper[5106]: I0320 00:10:54.533110 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerStarted","Data":"fece05bf4471d253fc963704a8c67e46be2138f175022e70538c8b6b2a055eab"}
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.533135 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerStarted","Data":"e60c16dea81b002da38f6e74a1183aae0d68d5ec2c0f76342944bc4a73fdae4c"}
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.536810 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-trcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.550709 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f7bf04f-91df-48c2-916a-afe1e635b543\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status:
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b378391966
874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0320 00:10:17.305762 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 00:10:17.305884 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0320 00:10:17.306612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-695311479/tls.crt::/tmp/serving-cert-695311479/tls.key\\\\\\\"\\\\nI0320 00:10:18.010003 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 00:10:18.011836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 00:10:18.011852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 00:10:18.011883 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 00:10:18.011894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 00:10:18.015663 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 00:10:18.015691 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015698 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 00:10:18.015708 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 00:10:18.015711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 00:10:18.015715 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0320 00:10:18.015743 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0320 00:10:18.017230 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.561302 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fc70aa-db07-47cd-b307-36ca79bc3366\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98eff49bb458a51ffc4487102dc29b9929a5366f463ffff54f418c44cc812a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98eff49bb458a51ffc4487102dc29b9929a5366f463ffff54f418c44cc812a3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:10:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060b2ea1dbbae4b24c417f403a49b4c6a7ef51104b4406627df8e6ec649f814e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\
\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wwnpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.571864 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kq4bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwttp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kq4bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.582023 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c5fc6b1-3d0e-4319-b15c-341af0218c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23b036e80d136c45a9ff13a63408428b22b422d4f4dcf9c1e8973f1ff9c5bb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea7f9d9918db9d162a898a623186f1b6c094b8b6a48de2e07d8a3b950e1989a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://889ed27676105199f97185f7271b652aed7692b5b50a68e9eb5403ffffe024dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.589558 5106 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.589617 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.589627 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.589643 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.589653 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:54Z","lastTransitionTime":"2026-03-20T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.592021 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xtksh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://4884a24b5e56e4fa296eff21cdf419b0193f65bffeaf8fcd6a1ad11c289ae430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supple
mentalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xtksh\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.607434 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99795294-4844-44e8-b55b-998323bd4f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:10:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvw6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.616076 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a6c6201-eadf-497e-921b-e5fcec3ccddb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-769dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.633914 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfb8ff3-0a09-426e-a55b-fdb55d63f156\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8eeb239f1f4dc20d7a4eb3082743b15402fcfa64c2d765f055b60febb4bb8159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e57cd65d5edb028b95e533a8f24e531dd67b433e774ff14db6159c675c881932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://db54af3009de86e4a8fcc0fdc5233912d638c6cb4df949f46d9d0836472096c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f027531e351171f27f0d234f1340e8fc330fd1beef5df845b3d7c67d8f8cdd5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8428ebe4abc8dd7a8d292a9989c16c6616ff7e40a01779178094a437cb8af76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.645227 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.654045 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qf4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.663147 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-zqbrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9d176-e017-4ab6-b0ad-7d97c5746baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://f57ca4ed42258fd7b5ebc47a3faa76ead7564350e2efb1768ce3d03f72d37077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/
tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24fjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zqbrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.670827 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b816935-a84b-4aa1-850d-23de45ec2047\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a52ed260d73e5a0b4483295754
0e259da3d4ef397908acd15ad2b9a53eac5878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.681447 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268db469-5250-4a96-998b-1d4a465bb175\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a
c75ee6fde37618d4f3addeaf11a5035d2a16b4a85f84bfb1ce62033bdfaaad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\
":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://35863999352b5c14cf487ffef517f6c5011b42d6b9129d6f9f161ad9b265e356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a23d38bcc5416cc2ef716b58ecbcf50ca7b47b3b0e633e63c4cb2566c0285d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"
imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.691871 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.691924 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 
00:10:54.691934 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.691952 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.691972 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:54Z","lastTransitionTime":"2026-03-20T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.693070 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.701510 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.709323 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.717141 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://f4766981f8e40e8708146759f008b2872a2ab29657bcc7a3acb61209af814cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"10m\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.732415 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99795294-4844-44e8-b55b-998323bd4f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:10:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszfl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qvw6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.741723 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a6c6201-eadf-497e-921b-e5fcec3ccddb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://37037d48633e82a846cec06b51f564d21ade971368eb1a8a5bb29b596c2c5ca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e305e307099c05996c1326f05d1414ce358ed6c0ec58221736b93d0a4312344c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qfrsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-769dn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.756696 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bfb8ff3-0a09-426e-a55b-fdb55d63f156\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://8eeb239f1f4dc20d7a4eb3082743b15402fcfa64c2d765f055b60febb4bb8159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://e57cd65d5edb028b95e533a8f24e531dd67b433e774ff14db6159c675c881932\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://db54af3009de86e4a8fcc0fdc5233912d638c6cb4df949f46d9d0836472096c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f027531e351171f27f0d234f1340e8fc330fd1beef5df845b3d7c67d8f8cdd5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8428ebe4abc8dd7a8d292a9989c16c6616ff7e40a01779178094a437cb8af76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28fd74e99b638bb9595c7264bfcf347a01b23443a31b30b647469e940b1944c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdbdc6456dcf46e3b053d2e373209ad2a613b01c764e7ceb747eb6113ceaf407\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://f8740daa3049d5ce88c43397cb910bb129754d1e31df71285080dae6fffe7e34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.765714 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://56b16f01d4a5ea21981edfae77e49d0f1962ed6d21e5a0e26ad37f9ac42bbe16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.772498 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjs84\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5qf4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.778526 5106 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-zqbrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9d176-e017-4ab6-b0ad-7d97c5746baf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://f57ca4ed42258fd7b5ebc47a3faa76ead7564350e2efb1768ce3d03f72d37077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/
tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-24fjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zqbrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.785491 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b816935-a84b-4aa1-850d-23de45ec2047\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a52ed260d73e5a0b4483295754
0e259da3d4ef397908acd15ad2b9a53eac5878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7d8f6013073e85e810800f339638debb15b808c3798d33eca523c8ec7e25042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.794177 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.794216 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.794228 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.794243 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.794254 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:54Z","lastTransitionTime":"2026-03-20T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.795317 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"268db469-5250-4a96-998b-1d4a465bb175\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ac75ee6fde37618d4f3addeaf11a5035d2a16b4a85f84bfb1ce62033bdfaaad7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://35863999352b5c14cf487ffef517f6c5011b42d6b9129d6f9f161ad9b265e356\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde72610
9a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a23d38bcc5416cc2ef716b58ecbcf50ca7b47b3b0e633e63c4cb2566c0285d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.803398 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.811012 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.819201 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://708d1108600e19bf17f8e7d9b46b72aba31e28d1e1d323936485aca50078de2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:54Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\
"},\\\"containerID\\\":\\\"cri-o://ce2e8842699a47c1b0a130e93b149bbb197305493fe7020fbe58717b8475513b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.826884 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://f4766981f8e40e8708146759f008b2872a2ab29657bcc7a3acb61209af814cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"10m\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.835259 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.860482 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8lh8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-trcsc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.896290 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.896324 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.896334 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.896347 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.896358 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:54Z","lastTransitionTime":"2026-03-20T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.901509 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f7bf04f-91df-48c2-916a-afe1e635b543\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b378391966
874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T00:10:18Z\\\",\\\"message\\\":\\\"envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW0320 00:10:17.305762 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 00:10:17.305884 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0320 00:10:17.306612 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-695311479/tls.crt::/tmp/serving-cert-695311479/tls.key\\\\\\\"\\\\nI0320 00:10:18.010003 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 00:10:18.011836 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 00:10:18.011852 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 00:10:18.011883 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 00:10:18.011894 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 00:10:18.015663 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 00:10:18.015691 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015698 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 00:10:18.015704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 00:10:18.015708 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 00:10:18.015711 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 00:10:18.015715 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0320 00:10:18.015743 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0320 00:10:18.017230 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:16Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.941617 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fc70aa-db07-47cd-b307-36ca79bc3366\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98eff49bb458a51ffc4487102dc29b9929a5366f463ffff54f418c44cc812a3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98eff49bb458a51ffc4487102dc29b9929a5366f463ffff54f418c44cc812a3d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:10:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:10:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://060b2ea1dbbae4b24c417f403a49b4c6a7ef51104b4406627df8e6ec649f814e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:52Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\
\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxgmg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wwnpd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.978057 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kq4bp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e025495-7d3d-4ff6-a3af-a6d3c459cc74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://33a4e13819accc3808df17efd6a1592b1c0f9dc24ff01edb45e0ab9de7706e53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwttp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kq4bp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.997874 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.997918 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.997928 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.997941 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:54 crc kubenswrapper[5106]: I0320 00:10:54.997951 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:54Z","lastTransitionTime":"2026-03-20T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.020487 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c5fc6b1-3d0e-4319-b15c-341af0218c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:09:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://23b036e80d136c45a9ff13a63408428b22b422d4f4dcf9c1e8973f1ff9c5bb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ea7f9d9918db9d162a898a623186f1b6c094b8b6a48de2e07d8a3b950e1989a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://889ed27676105199f97185f7271b652aed7692b5b50a68e9eb5403ffffe024dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:09:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f71878466bb70c73675fef76f186b49df3bec40bbb1c5f445ebe5ea3917349d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T00:09:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T00:09:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:09:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.061052 5106 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-xtksh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T00:10:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://4884a24b5e56e4fa296eff21cdf419b0193f65bffeaf8fcd6a1ad11c289ae430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T00:10:53Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mvlx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T00:10:24Z\\\"}}\" for pod \"openshift-multus\"/\"multus-xtksh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.100302 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.100341 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.100350 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.100362 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.100371 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:55Z","lastTransitionTime":"2026-03-20T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.160465 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.160465 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:55 crc kubenswrapper[5106]: E0320 00:10:55.160661 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:10:55 crc kubenswrapper[5106]: E0320 00:10:55.160741 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.202792 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.202855 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.202870 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.202892 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.202918 5106 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:55Z","lastTransitionTime":"2026-03-20T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.306127 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.306180 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.306193 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.306210 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.306221 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:55Z","lastTransitionTime":"2026-03-20T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.408262 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.408311 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.408323 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.408339 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.408350 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:55Z","lastTransitionTime":"2026-03-20T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.509388 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.509419 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.509427 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.509439 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.509449 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:55Z","lastTransitionTime":"2026-03-20T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.538149 5106 generic.go:358] "Generic (PLEG): container finished" podID="65fc70aa-db07-47cd-b307-36ca79bc3366" containerID="060b2ea1dbbae4b24c417f403a49b4c6a7ef51104b4406627df8e6ec649f814e" exitCode=0 Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.538270 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" event={"ID":"65fc70aa-db07-47cd-b307-36ca79bc3366","Type":"ContainerDied","Data":"060b2ea1dbbae4b24c417f403a49b4c6a7ef51104b4406627df8e6ec649f814e"} Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.540251 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" event={"ID":"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe","Type":"ContainerStarted","Data":"84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65"} Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.576463 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=31.576448108 podStartE2EDuration="31.576448108s" podCreationTimestamp="2026-03-20 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:10:55.576007607 +0000 UTC m=+110.009741681" watchObservedRunningTime="2026-03-20 00:10:55.576448108 +0000 UTC m=+110.010182162" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.611163 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.611199 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.611208 5106 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.611220 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.611230 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:55Z","lastTransitionTime":"2026-03-20T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.635162 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-zqbrj" podStartSLOduration=85.635141154 podStartE2EDuration="1m25.635141154s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:10:55.635098083 +0000 UTC m=+110.068832157" watchObservedRunningTime="2026-03-20 00:10:55.635141154 +0000 UTC m=+110.068875208" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.659042 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=30.659021029 podStartE2EDuration="30.659021029s" podCreationTimestamp="2026-03-20 00:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:10:55.658597288 +0000 UTC m=+110.092331342" watchObservedRunningTime="2026-03-20 00:10:55.659021029 +0000 UTC m=+110.092755083" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.696257 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=31.696243162000002 podStartE2EDuration="31.696243162s" podCreationTimestamp="2026-03-20 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:10:55.680895473 +0000 UTC m=+110.114629527" watchObservedRunningTime="2026-03-20 00:10:55.696243162 +0000 UTC m=+110.129977216" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.713390 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.713436 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.713448 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.713463 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.713475 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:55Z","lastTransitionTime":"2026-03-20T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.812321 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kq4bp" podStartSLOduration=84.81230097 podStartE2EDuration="1m24.81230097s" podCreationTimestamp="2026-03-20 00:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:10:55.800649855 +0000 UTC m=+110.234383909" watchObservedRunningTime="2026-03-20 00:10:55.81230097 +0000 UTC m=+110.246035034" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.812559 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=31.812553207 podStartE2EDuration="31.812553207s" podCreationTimestamp="2026-03-20 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:10:55.811963472 +0000 UTC m=+110.245697536" watchObservedRunningTime="2026-03-20 00:10:55.812553207 +0000 UTC m=+110.246287261" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.815201 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.815253 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.815268 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.815288 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.815300 5106 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:55Z","lastTransitionTime":"2026-03-20T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.854687 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xtksh" podStartSLOduration=85.854670883 podStartE2EDuration="1m25.854670883s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:10:55.824610042 +0000 UTC m=+110.258344096" watchObservedRunningTime="2026-03-20 00:10:55.854670883 +0000 UTC m=+110.288404937" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.868391 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podStartSLOduration=85.86837302 podStartE2EDuration="1m25.86837302s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:10:55.867401256 +0000 UTC m=+110.301135320" watchObservedRunningTime="2026-03-20 00:10:55.86837302 +0000 UTC m=+110.302107074" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.917994 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.918033 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.918046 5106 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.918064 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:55 crc kubenswrapper[5106]: I0320 00:10:55.918077 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:55Z","lastTransitionTime":"2026-03-20T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.019479 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.019523 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.019534 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.019550 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.019563 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:56Z","lastTransitionTime":"2026-03-20T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.121934 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.121969 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.121978 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.121990 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.121998 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:56Z","lastTransitionTime":"2026-03-20T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.160251 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.160295 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:56 crc kubenswrapper[5106]: E0320 00:10:56.160384 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:10:56 crc kubenswrapper[5106]: E0320 00:10:56.160839 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.161410 5106 scope.go:117] "RemoveContainer" containerID="b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d" Mar 20 00:10:56 crc kubenswrapper[5106]: E0320 00:10:56.161687 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.224355 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.224404 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.224417 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.224440 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.224454 5106 setters.go:618] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:56Z","lastTransitionTime":"2026-03-20T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.326653 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.326990 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.327003 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.327020 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.327035 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:56Z","lastTransitionTime":"2026-03-20T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.429759 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.429803 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.429815 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.429833 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.429846 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:56Z","lastTransitionTime":"2026-03-20T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.532674 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.532954 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.532965 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.532981 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.532991 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:56Z","lastTransitionTime":"2026-03-20T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.545773 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" event={"ID":"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe","Type":"ContainerStarted","Data":"b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef"} Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.547673 5106 generic.go:358] "Generic (PLEG): container finished" podID="65fc70aa-db07-47cd-b307-36ca79bc3366" containerID="2fb37eec57ba4fb5712348b370d153ca81b2555bf26516a692541a3f66dc5450" exitCode=0 Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.547722 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" event={"ID":"65fc70aa-db07-47cd-b307-36ca79bc3366","Type":"ContainerDied","Data":"2fb37eec57ba4fb5712348b370d153ca81b2555bf26516a692541a3f66dc5450"} Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.561489 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" podStartSLOduration=86.561415789 podStartE2EDuration="1m26.561415789s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:10:56.560915947 +0000 UTC m=+110.994650031" watchObservedRunningTime="2026-03-20 00:10:56.561415789 +0000 UTC m=+110.995149853" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.635485 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.635530 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.635543 5106 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.635559 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.635593 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:56Z","lastTransitionTime":"2026-03-20T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.737482 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.737514 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.737523 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.737536 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.737544 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:56Z","lastTransitionTime":"2026-03-20T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.839122 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.839161 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.839173 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.839188 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.839197 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:56Z","lastTransitionTime":"2026-03-20T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.940782 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.940822 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.940833 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.940849 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:56 crc kubenswrapper[5106]: I0320 00:10:56.940860 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:56Z","lastTransitionTime":"2026-03-20T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.042813 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.042865 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.042885 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.042909 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.042925 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:57Z","lastTransitionTime":"2026-03-20T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.145134 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.145187 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.145200 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.145219 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.145235 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:57Z","lastTransitionTime":"2026-03-20T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.161803 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.161887 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.162144 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.162199 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.246843 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.246880 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.246889 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.246903 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.246914 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:57Z","lastTransitionTime":"2026-03-20T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.348938 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.349275 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.349287 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.349300 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.349310 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:57Z","lastTransitionTime":"2026-03-20T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.451686 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.451731 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.451745 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.451758 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.451767 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:57Z","lastTransitionTime":"2026-03-20T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.553565 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.553646 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.553660 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.553679 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.553691 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:57Z","lastTransitionTime":"2026-03-20T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.569246 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerStarted","Data":"6b599922621a6eb6574265e93c8a15394ed66d47eb7416a4c360858244f15c11"} Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.572771 5106 generic.go:358] "Generic (PLEG): container finished" podID="65fc70aa-db07-47cd-b307-36ca79bc3366" containerID="39f62348598f6fe95e12c6984fefbe0da11f7c0dabb58f2b40d08171c26c523c" exitCode=0 Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.573204 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" event={"ID":"65fc70aa-db07-47cd-b307-36ca79bc3366","Type":"ContainerDied","Data":"39f62348598f6fe95e12c6984fefbe0da11f7c0dabb58f2b40d08171c26c523c"} Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.607123 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.607281 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.607323 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.607294443 +0000 UTC m=+144.041028497 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.607395 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.607406 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.607427 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.607440 5106 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.607440 5106 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs\") pod \"network-metrics-daemon-5qf4l\" (UID: \"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\") " pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.607503 5106 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.607515 5106 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.607505 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.607487098 +0000 UTC m=+144.041221152 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.607624 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.607601721 +0000 UTC m=+144.041335775 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.607645 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs podName:64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56 nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.607636462 +0000 UTC m=+144.041370516 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs") pod "network-metrics-daemon-5qf4l" (UID: "64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.607680 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.607719 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.607891 5106 projected.go:289] Couldn't get 
configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.607886 5106 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.607904 5106 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.607917 5106 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.607976 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.60795267 +0000 UTC m=+144.041686724 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 20 00:10:57 crc kubenswrapper[5106]: E0320 00:10:57.607992 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.607985701 +0000 UTC m=+144.041719755 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.656169 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.656207 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.656216 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.656232 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.656241 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:57Z","lastTransitionTime":"2026-03-20T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.757894 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.757931 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.757940 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.757954 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.757965 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:57Z","lastTransitionTime":"2026-03-20T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.859603 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.859774 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.859783 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.859801 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.859810 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:57Z","lastTransitionTime":"2026-03-20T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.961857 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.961898 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.961908 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.961922 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:57 crc kubenswrapper[5106]: I0320 00:10:57.961933 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:57Z","lastTransitionTime":"2026-03-20T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.064329 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.064370 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.064397 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.064411 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.064420 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:58Z","lastTransitionTime":"2026-03-20T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.160761 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.160788 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l"
Mar 20 00:10:58 crc kubenswrapper[5106]: E0320 00:10:58.161137 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Mar 20 00:10:58 crc kubenswrapper[5106]: E0320 00:10:58.161232 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.167360 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.167401 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.167413 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.167430 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.167441 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:58Z","lastTransitionTime":"2026-03-20T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.269781 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.269845 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.269868 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.269898 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.269916 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:58Z","lastTransitionTime":"2026-03-20T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.372625 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.372698 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.372722 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.372751 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.372770 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:58Z","lastTransitionTime":"2026-03-20T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.476167 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.476368 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.476393 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.476439 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.476458 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:58Z","lastTransitionTime":"2026-03-20T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.578025 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.578069 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.578079 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.578093 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.578102 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:58Z","lastTransitionTime":"2026-03-20T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.580845 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" event={"ID":"65fc70aa-db07-47cd-b307-36ca79bc3366","Type":"ContainerStarted","Data":"846408a70af89834134aad216a2c6fa2ed25b39dd734dca38cd2e2df08492104"}
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.682159 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.682215 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.682226 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.682243 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.682255 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:58Z","lastTransitionTime":"2026-03-20T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.784861 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.784916 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.784932 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.784953 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.784963 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:58Z","lastTransitionTime":"2026-03-20T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.886781 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.886829 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.886840 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.886856 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.886870 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:58Z","lastTransitionTime":"2026-03-20T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.963269 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.963328 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.963338 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.963353 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 00:10:58 crc kubenswrapper[5106]: I0320 00:10:58.963362 5106 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T00:10:58Z","lastTransitionTime":"2026-03-20T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.018176 5106 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.018892 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"]
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.026953 5106 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.137949 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.140417 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.140550 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.141371 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.143152 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.161283 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Mar 20 00:10:59 crc kubenswrapper[5106]: E0320 00:10:59.161410 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.161552 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Mar 20 00:10:59 crc kubenswrapper[5106]: E0320 00:10:59.161730 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.226735 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f66104d4-bdb0-4109-b301-cfe281a06c69-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-6btdj\" (UID: \"f66104d4-bdb0-4109-b301-cfe281a06c69\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.226810 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f66104d4-bdb0-4109-b301-cfe281a06c69-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-6btdj\" (UID: \"f66104d4-bdb0-4109-b301-cfe281a06c69\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.227018 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f66104d4-bdb0-4109-b301-cfe281a06c69-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-6btdj\" (UID: \"f66104d4-bdb0-4109-b301-cfe281a06c69\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.227062 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f66104d4-bdb0-4109-b301-cfe281a06c69-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-6btdj\" (UID: \"f66104d4-bdb0-4109-b301-cfe281a06c69\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.227087 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f66104d4-bdb0-4109-b301-cfe281a06c69-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-6btdj\" (UID: \"f66104d4-bdb0-4109-b301-cfe281a06c69\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.328560 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f66104d4-bdb0-4109-b301-cfe281a06c69-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-6btdj\" (UID: \"f66104d4-bdb0-4109-b301-cfe281a06c69\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.328831 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f66104d4-bdb0-4109-b301-cfe281a06c69-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-6btdj\" (UID: \"f66104d4-bdb0-4109-b301-cfe281a06c69\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.328848 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f66104d4-bdb0-4109-b301-cfe281a06c69-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-6btdj\" (UID: \"f66104d4-bdb0-4109-b301-cfe281a06c69\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.328868 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f66104d4-bdb0-4109-b301-cfe281a06c69-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-6btdj\" (UID: \"f66104d4-bdb0-4109-b301-cfe281a06c69\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.328897 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f66104d4-bdb0-4109-b301-cfe281a06c69-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-6btdj\" (UID: \"f66104d4-bdb0-4109-b301-cfe281a06c69\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.329605 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f66104d4-bdb0-4109-b301-cfe281a06c69-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-6btdj\" (UID: \"f66104d4-bdb0-4109-b301-cfe281a06c69\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.329654 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f66104d4-bdb0-4109-b301-cfe281a06c69-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-6btdj\" (UID: \"f66104d4-bdb0-4109-b301-cfe281a06c69\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.330437 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f66104d4-bdb0-4109-b301-cfe281a06c69-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-6btdj\" (UID: \"f66104d4-bdb0-4109-b301-cfe281a06c69\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.334446 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f66104d4-bdb0-4109-b301-cfe281a06c69-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-6btdj\" (UID: \"f66104d4-bdb0-4109-b301-cfe281a06c69\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.350192 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f66104d4-bdb0-4109-b301-cfe281a06c69-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-6btdj\" (UID: \"f66104d4-bdb0-4109-b301-cfe281a06c69\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.452800 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj"
Mar 20 00:10:59 crc kubenswrapper[5106]: W0320 00:10:59.463865 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf66104d4_bdb0_4109_b301_cfe281a06c69.slice/crio-e7ee2f333fff6f33a94521a2372fe6cb928416a5793f94874c3b03a216f45d83 WatchSource:0}: Error finding container e7ee2f333fff6f33a94521a2372fe6cb928416a5793f94874c3b03a216f45d83: Status 404 returned error can't find the container with id e7ee2f333fff6f33a94521a2372fe6cb928416a5793f94874c3b03a216f45d83
Mar 20 00:10:59 crc kubenswrapper[5106]: I0320 00:10:59.586007 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj" event={"ID":"f66104d4-bdb0-4109-b301-cfe281a06c69","Type":"ContainerStarted","Data":"e7ee2f333fff6f33a94521a2372fe6cb928416a5793f94874c3b03a216f45d83"}
Mar 20 00:11:00 crc kubenswrapper[5106]: I0320 00:11:00.160809 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Mar 20 00:11:00 crc kubenswrapper[5106]: I0320 00:11:00.160875 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l"
Mar 20 00:11:00 crc kubenswrapper[5106]: E0320 00:11:00.160953 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Mar 20 00:11:00 crc kubenswrapper[5106]: E0320 00:11:00.161023 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56"
Mar 20 00:11:00 crc kubenswrapper[5106]: I0320 00:11:00.594649 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerStarted","Data":"0c085f3e5a57eee1a558eb14c8d707dd271557ce447c84bbcd4949881723922b"}
Mar 20 00:11:00 crc kubenswrapper[5106]: I0320 00:11:00.594986 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r"
Mar 20 00:11:00 crc kubenswrapper[5106]: I0320 00:11:00.594997 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r"
Mar 20 00:11:00 crc kubenswrapper[5106]: I0320 00:11:00.595006 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r"
Mar 20 00:11:00 crc kubenswrapper[5106]: I0320 00:11:00.598365 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj" event={"ID":"f66104d4-bdb0-4109-b301-cfe281a06c69","Type":"ContainerStarted","Data":"80bb835a8c90cc53b2ba98cf81567e5dad5107bc22d17c3a8e4d86ebd48ef533"}
Mar 20 00:11:00 crc kubenswrapper[5106]: I0320 00:11:00.624473 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" podStartSLOduration=90.624443942 podStartE2EDuration="1m30.624443942s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:00.624415012 +0000 UTC m=+115.058149096" watchObservedRunningTime="2026-03-20 00:11:00.624443942 +0000 UTC m=+115.058178036"
Mar 20 00:11:00 crc kubenswrapper[5106]: I0320 00:11:00.631420 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r"
Mar 20 00:11:00 crc kubenswrapper[5106]: I0320 00:11:00.633328 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r"
Mar 20 00:11:00 crc kubenswrapper[5106]: I0320 00:11:00.636822 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-6btdj" podStartSLOduration=90.636805925 podStartE2EDuration="1m30.636805925s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:00.636358184 +0000 UTC m=+115.070092238" watchObservedRunningTime="2026-03-20 00:11:00.636805925 +0000 UTC m=+115.070540019"
Mar 20 00:11:01 crc kubenswrapper[5106]: I0320 00:11:01.160463 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Mar 20 00:11:01 crc kubenswrapper[5106]: I0320 00:11:01.160475 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Mar 20 00:11:01 crc kubenswrapper[5106]: E0320 00:11:01.160656 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Mar 20 00:11:01 crc kubenswrapper[5106]: E0320 00:11:01.160707 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Mar 20 00:11:01 crc kubenswrapper[5106]: I0320 00:11:01.605489 5106 generic.go:358] "Generic (PLEG): container finished" podID="65fc70aa-db07-47cd-b307-36ca79bc3366" containerID="846408a70af89834134aad216a2c6fa2ed25b39dd734dca38cd2e2df08492104" exitCode=0
Mar 20 00:11:01 crc kubenswrapper[5106]: I0320 00:11:01.605626 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" event={"ID":"65fc70aa-db07-47cd-b307-36ca79bc3366","Type":"ContainerDied","Data":"846408a70af89834134aad216a2c6fa2ed25b39dd734dca38cd2e2df08492104"}
Mar 20 00:11:02 crc kubenswrapper[5106]: I0320 00:11:02.160101 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Mar 20 00:11:02 crc kubenswrapper[5106]: E0320 00:11:02.160236 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Mar 20 00:11:02 crc kubenswrapper[5106]: I0320 00:11:02.160333 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l"
Mar 20 00:11:02 crc kubenswrapper[5106]: E0320 00:11:02.160542 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56"
Mar 20 00:11:03 crc kubenswrapper[5106]: I0320 00:11:03.160032 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Mar 20 00:11:03 crc kubenswrapper[5106]: I0320 00:11:03.160126 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Mar 20 00:11:03 crc kubenswrapper[5106]: E0320 00:11:03.160476 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Mar 20 00:11:03 crc kubenswrapper[5106]: E0320 00:11:03.160867 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Mar 20 00:11:03 crc kubenswrapper[5106]: I0320 00:11:03.587869 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5qf4l"]
Mar 20 00:11:03 crc kubenswrapper[5106]: I0320 00:11:03.588000 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l"
Mar 20 00:11:03 crc kubenswrapper[5106]: E0320 00:11:03.588092 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:11:03 crc kubenswrapper[5106]: I0320 00:11:03.614329 5106 generic.go:358] "Generic (PLEG): container finished" podID="65fc70aa-db07-47cd-b307-36ca79bc3366" containerID="3aec8ca3f6d189f8cdf2dc241473e4283219008023d3655b96a0dcd9eace5600" exitCode=0 Mar 20 00:11:03 crc kubenswrapper[5106]: I0320 00:11:03.614388 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" event={"ID":"65fc70aa-db07-47cd-b307-36ca79bc3366","Type":"ContainerDied","Data":"3aec8ca3f6d189f8cdf2dc241473e4283219008023d3655b96a0dcd9eace5600"} Mar 20 00:11:04 crc kubenswrapper[5106]: I0320 00:11:04.160303 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:11:04 crc kubenswrapper[5106]: E0320 00:11:04.160447 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:11:05 crc kubenswrapper[5106]: I0320 00:11:05.160638 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:11:05 crc kubenswrapper[5106]: I0320 00:11:05.160706 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:11:05 crc kubenswrapper[5106]: I0320 00:11:05.160744 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:11:05 crc kubenswrapper[5106]: E0320 00:11:05.161182 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:11:05 crc kubenswrapper[5106]: E0320 00:11:05.160973 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:11:05 crc kubenswrapper[5106]: E0320 00:11:05.161267 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:11:05 crc kubenswrapper[5106]: I0320 00:11:05.626632 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" event={"ID":"65fc70aa-db07-47cd-b307-36ca79bc3366","Type":"ContainerStarted","Data":"acfc2a54d62ee5cd248cf58e54e7af8ab2dcc5276963b91a2254a22ce50383c2"} Mar 20 00:11:05 crc kubenswrapper[5106]: I0320 00:11:05.659533 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-wwnpd" podStartSLOduration=95.659491061 podStartE2EDuration="1m35.659491061s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:05.657367447 +0000 UTC m=+120.091101532" watchObservedRunningTime="2026-03-20 00:11:05.659491061 +0000 UTC m=+120.093225155" Mar 20 00:11:06 crc kubenswrapper[5106]: I0320 00:11:06.160762 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:11:06 crc kubenswrapper[5106]: E0320 00:11:06.161044 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:11:06 crc kubenswrapper[5106]: E0320 00:11:06.994540 5106 kubelet_node_status.go:509] "Node not becoming ready in time after startup" Mar 20 00:11:07 crc kubenswrapper[5106]: I0320 00:11:07.161881 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:11:07 crc kubenswrapper[5106]: E0320 00:11:07.161983 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:11:07 crc kubenswrapper[5106]: I0320 00:11:07.162117 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:11:07 crc kubenswrapper[5106]: E0320 00:11:07.162165 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:11:07 crc kubenswrapper[5106]: I0320 00:11:07.162260 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:11:07 crc kubenswrapper[5106]: E0320 00:11:07.162321 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:11:08 crc kubenswrapper[5106]: I0320 00:11:08.160668 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:11:08 crc kubenswrapper[5106]: E0320 00:11:08.160886 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:11:08 crc kubenswrapper[5106]: E0320 00:11:08.447208 5106 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 00:11:09 crc kubenswrapper[5106]: I0320 00:11:09.160194 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:11:09 crc kubenswrapper[5106]: E0320 00:11:09.160334 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:11:09 crc kubenswrapper[5106]: I0320 00:11:09.160369 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:11:09 crc kubenswrapper[5106]: I0320 00:11:09.160391 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:11:09 crc kubenswrapper[5106]: E0320 00:11:09.160639 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:11:09 crc kubenswrapper[5106]: E0320 00:11:09.160809 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:11:10 crc kubenswrapper[5106]: I0320 00:11:10.159943 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:11:10 crc kubenswrapper[5106]: E0320 00:11:10.160076 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:11:11 crc kubenswrapper[5106]: I0320 00:11:11.161004 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:11:11 crc kubenswrapper[5106]: I0320 00:11:11.161022 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:11:11 crc kubenswrapper[5106]: I0320 00:11:11.161004 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:11:11 crc kubenswrapper[5106]: E0320 00:11:11.161156 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:11:11 crc kubenswrapper[5106]: E0320 00:11:11.161244 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:11:11 crc kubenswrapper[5106]: E0320 00:11:11.161442 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:11:11 crc kubenswrapper[5106]: I0320 00:11:11.162145 5106 scope.go:117] "RemoveContainer" containerID="b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d" Mar 20 00:11:11 crc kubenswrapper[5106]: I0320 00:11:11.646614 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Mar 20 00:11:11 crc kubenswrapper[5106]: I0320 00:11:11.648231 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"fefd59f795733cb8744cc94f38c6a15e90eef9eb0e9824f61c74ad917a5fce4b"} Mar 20 00:11:11 crc kubenswrapper[5106]: I0320 00:11:11.648744 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:11:11 crc kubenswrapper[5106]: I0320 00:11:11.676212 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=47.676188024 podStartE2EDuration="47.676188024s" podCreationTimestamp="2026-03-20 00:10:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:11.675145647 +0000 UTC m=+126.108879701" watchObservedRunningTime="2026-03-20 00:11:11.676188024 +0000 UTC m=+126.109922118" Mar 20 00:11:12 crc kubenswrapper[5106]: I0320 00:11:12.160441 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:11:12 crc kubenswrapper[5106]: E0320 00:11:12.160628 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Mar 20 00:11:13 crc kubenswrapper[5106]: I0320 00:11:13.160234 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:11:13 crc kubenswrapper[5106]: E0320 00:11:13.160355 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Mar 20 00:11:13 crc kubenswrapper[5106]: I0320 00:11:13.160436 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:11:13 crc kubenswrapper[5106]: I0320 00:11:13.160522 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:11:13 crc kubenswrapper[5106]: E0320 00:11:13.160718 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5qf4l" podUID="64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56" Mar 20 00:11:13 crc kubenswrapper[5106]: E0320 00:11:13.160822 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Mar 20 00:11:14 crc kubenswrapper[5106]: I0320 00:11:14.160418 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:11:14 crc kubenswrapper[5106]: I0320 00:11:14.164422 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Mar 20 00:11:14 crc kubenswrapper[5106]: I0320 00:11:14.164843 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Mar 20 00:11:15 crc kubenswrapper[5106]: I0320 00:11:15.160565 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:11:15 crc kubenswrapper[5106]: I0320 00:11:15.160948 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:11:15 crc kubenswrapper[5106]: I0320 00:11:15.161009 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:11:15 crc kubenswrapper[5106]: I0320 00:11:15.163559 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Mar 20 00:11:15 crc kubenswrapper[5106]: I0320 00:11:15.163963 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Mar 20 00:11:15 crc kubenswrapper[5106]: I0320 00:11:15.164248 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Mar 20 00:11:15 crc kubenswrapper[5106]: I0320 00:11:15.164351 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.080134 5106 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.134094 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-8jhlx"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.207596 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.213672 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.214830 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.225922 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.226066 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.226184 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.226468 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.226662 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.226830 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.227112 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.227300 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.227528 5106 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.227752 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.230765 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-cp4kp"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.235260 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.246822 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29566080-czff6"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.247203 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.248750 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0af6748b-028f-4ab7-8039-b29223f0f492-config\") pod \"openshift-controller-manager-operator-686468bdd5-ngmrb\" (UID: \"0af6748b-028f-4ab7-8039-b29223f0f492\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.248791 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a22b44ae-8b94-4d76-9211-859b665d08cb-config\") pod \"machine-api-operator-755bb95488-8jhlx\" (UID: \"a22b44ae-8b94-4d76-9211-859b665d08cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.248815 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a22b44ae-8b94-4d76-9211-859b665d08cb-images\") pod \"machine-api-operator-755bb95488-8jhlx\" (UID: \"a22b44ae-8b94-4d76-9211-859b665d08cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.248837 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a22b44ae-8b94-4d76-9211-859b665d08cb-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-8jhlx\" (UID: \"a22b44ae-8b94-4d76-9211-859b665d08cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.248886 5106 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0af6748b-028f-4ab7-8039-b29223f0f492-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-ngmrb\" (UID: \"0af6748b-028f-4ab7-8039-b29223f0f492\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.248921 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9672n\" (UniqueName: \"kubernetes.io/projected/a22b44ae-8b94-4d76-9211-859b665d08cb-kube-api-access-9672n\") pod \"machine-api-operator-755bb95488-8jhlx\" (UID: \"a22b44ae-8b94-4d76-9211-859b665d08cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.248955 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0af6748b-028f-4ab7-8039-b29223f0f492-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-ngmrb\" (UID: \"0af6748b-028f-4ab7-8039-b29223f0f492\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.248995 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzqpb\" (UniqueName: \"kubernetes.io/projected/0af6748b-028f-4ab7-8039-b29223f0f492-kube-api-access-hzqpb\") pod \"openshift-controller-manager-operator-686468bdd5-ngmrb\" (UID: \"0af6748b-028f-4ab7-8039-b29223f0f492\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.259321 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.259395 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.259600 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.259706 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.259719 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.260146 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.267898 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.268593 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.275964 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.278367 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29566080-czff6"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.281967 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.282181 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.285684 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-wqms8"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.288041 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.289960 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.292187 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.292192 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.294806 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.294819 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.294849 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.305817 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-vx9v6"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.305987 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.306704 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.311067 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.312085 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-vx9v6"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.312494 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.313083 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.313298 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.313329 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.314129 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-svc7c"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.314348 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.314484 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.315526 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.315765 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.316005 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.315776 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.316505 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.316517 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.321292 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.316827 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.316973 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.317073 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.317127 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.317299 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.317013 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.317376 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.326214 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.327253 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.327995 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.328687 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.330497 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.330996 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.331256 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.331708 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.331925 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.332204 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.336488 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qxnjl"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.336660 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.337037 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.337293 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.337522 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.338561 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.338873 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.341701 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.341737 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.343336 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-zbpp6"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.343340 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.343448 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-qxnjl"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.346834 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.346901 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.347766 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.349217 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.349255 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.349547 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.349290 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350105 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350383 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r956k\" (UniqueName: \"kubernetes.io/projected/8539a810-4a95-4205-99c6-30b6362cfa01-kube-api-access-r956k\") pod \"route-controller-manager-776cdc94d6-crd8g\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350410 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8539a810-4a95-4205-99c6-30b6362cfa01-client-ca\") pod \"route-controller-manager-776cdc94d6-crd8g\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350425 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-client-ca\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350443 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/884b9b2b-1ff2-4758-b964-5030e8973573-serviceca\") pod \"image-pruner-29566080-czff6\" (UID: \"884b9b2b-1ff2-4758-b964-5030e8973573\") " pod="openshift-image-registry/image-pruner-29566080-czff6"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350461 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74f7b3bf-429d-4b60-8b80-48300a789b1d-console-config\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350485 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45sxg\" (UniqueName: \"kubernetes.io/projected/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-kube-api-access-45sxg\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350504 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0af6748b-028f-4ab7-8039-b29223f0f492-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-ngmrb\" (UID: \"0af6748b-028f-4ab7-8039-b29223f0f492\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350527 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8539a810-4a95-4205-99c6-30b6362cfa01-tmp\") pod \"route-controller-manager-776cdc94d6-crd8g\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350543 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2722b7b1-fe01-4f55-8114-86b441329659-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350559 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2gmf\" (UniqueName: \"kubernetes.io/projected/2722b7b1-fe01-4f55-8114-86b441329659-kube-api-access-t2gmf\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350597 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0af6748b-028f-4ab7-8039-b29223f0f492-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-ngmrb\" (UID: \"0af6748b-028f-4ab7-8039-b29223f0f492\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350613 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-serving-cert\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350627 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-serving-cert\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350642 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2722b7b1-fe01-4f55-8114-86b441329659-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350659 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-image-import-ca\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350675 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ff8b42b-9c00-4f62-bc0c-4d14276cfb63-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-x2xbl\" (UID: \"0ff8b42b-9c00-4f62-bc0c-4d14276cfb63\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350691 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-audit\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350705 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350722 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqtc4\" (UniqueName: \"kubernetes.io/projected/74f7b3bf-429d-4b60-8b80-48300a789b1d-kube-api-access-qqtc4\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350747 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a22b44ae-8b94-4d76-9211-859b665d08cb-images\") pod \"machine-api-operator-755bb95488-8jhlx\" (UID: \"a22b44ae-8b94-4d76-9211-859b665d08cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350763 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a22b44ae-8b94-4d76-9211-859b665d08cb-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-8jhlx\" (UID: \"a22b44ae-8b94-4d76-9211-859b665d08cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350778 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74f7b3bf-429d-4b60-8b80-48300a789b1d-trusted-ca-bundle\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350799 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-config\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350813 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74f7b3bf-429d-4b60-8b80-48300a789b1d-service-ca\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350833 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-audit-dir\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350851 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0af6748b-028f-4ab7-8039-b29223f0f492-config\") pod \"openshift-controller-manager-operator-686468bdd5-ngmrb\" (UID: \"0af6748b-028f-4ab7-8039-b29223f0f492\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350866 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a22b44ae-8b94-4d76-9211-859b665d08cb-config\") pod \"machine-api-operator-755bb95488-8jhlx\" (UID: \"a22b44ae-8b94-4d76-9211-859b665d08cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350881 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcpmt\" (UniqueName: \"kubernetes.io/projected/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-kube-api-access-bcpmt\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350897 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/2722b7b1-fe01-4f55-8114-86b441329659-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350914 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4123c23b-ea73-40e1-965a-5b1777b4e2be-serving-cert\") pod \"openshift-config-operator-5777786469-svc7c\" (UID: \"4123c23b-ea73-40e1-965a-5b1777b4e2be\") " pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350932 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c10c417-d8e5-4933-96b6-3a365ea480f3-config\") pod \"machine-approver-54c688565-jbnnd\" (UID: \"2c10c417-d8e5-4933-96b6-3a365ea480f3\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350946 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slslt\" (UniqueName: \"kubernetes.io/projected/2c10c417-d8e5-4933-96b6-3a365ea480f3-kube-api-access-slslt\") pod \"machine-approver-54c688565-jbnnd\" (UID: \"2c10c417-d8e5-4933-96b6-3a365ea480f3\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350961 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-etcd-client\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350979 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.350995 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74f7b3bf-429d-4b60-8b80-48300a789b1d-console-serving-cert\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351013 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9672n\" (UniqueName: \"kubernetes.io/projected/a22b44ae-8b94-4d76-9211-859b665d08cb-kube-api-access-9672n\") pod \"machine-api-operator-755bb95488-8jhlx\" (UID: \"a22b44ae-8b94-4d76-9211-859b665d08cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351066 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-encryption-config\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351084 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74f7b3bf-429d-4b60-8b80-48300a789b1d-oauth-serving-cert\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351100 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2c10c417-d8e5-4933-96b6-3a365ea480f3-machine-approver-tls\") pod \"machine-approver-54c688565-jbnnd\" (UID: \"2c10c417-d8e5-4933-96b6-3a365ea480f3\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351116 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwln4\" (UniqueName: \"kubernetes.io/projected/0ff8b42b-9c00-4f62-bc0c-4d14276cfb63-kube-api-access-vwln4\") pod \"openshift-apiserver-operator-846cbfc458-x2xbl\" (UID: \"0ff8b42b-9c00-4f62-bc0c-4d14276cfb63\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351131 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74f7b3bf-429d-4b60-8b80-48300a789b1d-console-oauth-config\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351149 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-node-pullsecrets\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351165 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2722b7b1-fe01-4f55-8114-86b441329659-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351182 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-config\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351204 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hzqpb\" (UniqueName: \"kubernetes.io/projected/0af6748b-028f-4ab7-8039-b29223f0f492-kube-api-access-hzqpb\") pod \"openshift-controller-manager-operator-686468bdd5-ngmrb\" (UID: \"0af6748b-028f-4ab7-8039-b29223f0f492\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351219 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4123c23b-ea73-40e1-965a-5b1777b4e2be-available-featuregates\") pod \"openshift-config-operator-5777786469-svc7c\" (UID: \"4123c23b-ea73-40e1-965a-5b1777b4e2be\") " pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351244 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2722b7b1-fe01-4f55-8114-86b441329659-tmp\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351270 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c10c417-d8e5-4933-96b6-3a365ea480f3-auth-proxy-config\") pod \"machine-approver-54c688565-jbnnd\" (UID: \"2c10c417-d8e5-4933-96b6-3a365ea480f3\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351284 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351299 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8539a810-4a95-4205-99c6-30b6362cfa01-serving-cert\") pod \"route-controller-manager-776cdc94d6-crd8g\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351315 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57s6r\" (UniqueName: \"kubernetes.io/projected/884b9b2b-1ff2-4758-b964-5030e8973573-kube-api-access-57s6r\") pod \"image-pruner-29566080-czff6\" (UID: \"884b9b2b-1ff2-4758-b964-5030e8973573\") " pod="openshift-image-registry/image-pruner-29566080-czff6"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351335 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t74cg\" (UniqueName: \"kubernetes.io/projected/4123c23b-ea73-40e1-965a-5b1777b4e2be-kube-api-access-t74cg\") pod \"openshift-config-operator-5777786469-svc7c\" (UID: \"4123c23b-ea73-40e1-965a-5b1777b4e2be\") " pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351350 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ff8b42b-9c00-4f62-bc0c-4d14276cfb63-config\") pod \"openshift-apiserver-operator-846cbfc458-x2xbl\" (UID: \"0ff8b42b-9c00-4f62-bc0c-4d14276cfb63\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351366 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8539a810-4a95-4205-99c6-30b6362cfa01-config\") pod \"route-controller-manager-776cdc94d6-crd8g\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.351382 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-tmp\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.353113 5106
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0af6748b-028f-4ab7-8039-b29223f0f492-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-ngmrb\" (UID: \"0af6748b-028f-4ab7-8039-b29223f0f492\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.355271 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a22b44ae-8b94-4d76-9211-859b665d08cb-config\") pod \"machine-api-operator-755bb95488-8jhlx\" (UID: \"a22b44ae-8b94-4d76-9211-859b665d08cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.355968 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0af6748b-028f-4ab7-8039-b29223f0f492-config\") pod \"openshift-controller-manager-operator-686468bdd5-ngmrb\" (UID: \"0af6748b-028f-4ab7-8039-b29223f0f492\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.356654 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.357347 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.357514 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a22b44ae-8b94-4d76-9211-859b665d08cb-images\") pod \"machine-api-operator-755bb95488-8jhlx\" (UID: \"a22b44ae-8b94-4d76-9211-859b665d08cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx" Mar 20 00:11:19 crc 
kubenswrapper[5106]: I0320 00:11:19.358648 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.359128 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0af6748b-028f-4ab7-8039-b29223f0f492-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-ngmrb\" (UID: \"0af6748b-028f-4ab7-8039-b29223f0f492\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.359294 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a22b44ae-8b94-4d76-9211-859b665d08cb-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-8jhlx\" (UID: \"a22b44ae-8b94-4d76-9211-859b665d08cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.361350 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-t49vx"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.362003 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.362381 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.362592 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.362709 5106 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.362829 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.363136 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.363229 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.363378 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.363514 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.363767 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.361409 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.364303 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.364950 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.365415 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.367447 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.367690 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.367902 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.368130 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.368393 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.368528 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.368686 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.368849 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.369458 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.370302 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.374353 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.374504 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jhgps"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.374638 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.375409 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzqpb\" (UniqueName: \"kubernetes.io/projected/0af6748b-028f-4ab7-8039-b29223f0f492-kube-api-access-hzqpb\") pod \"openshift-controller-manager-operator-686468bdd5-ngmrb\" (UID: \"0af6748b-028f-4ab7-8039-b29223f0f492\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.377683 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.377962 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.379200 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.384592 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-xfn66"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.386149 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.389284 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.389348 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.391814 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-72hsj"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.391930 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.392445 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9672n\" (UniqueName: \"kubernetes.io/projected/a22b44ae-8b94-4d76-9211-859b665d08cb-kube-api-access-9672n\") pod \"machine-api-operator-755bb95488-8jhlx\" (UID: \"a22b44ae-8b94-4d76-9211-859b665d08cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.394689 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.394939 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-72hsj" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.396246 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.397468 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.401675 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.401764 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.404182 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-vzb7m"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.404249 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.406477 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.406797 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.409049 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.409142 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.411388 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.413758 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.413849 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.413854 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.415971 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.416700 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.416814 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.421199 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sprmn"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.421289 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.423818 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-h54ck"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.423893 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sprmn" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.426378 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.426517 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h54ck" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.429708 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-5lqg7"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.429982 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.434863 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.435186 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-5lqg7" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.436167 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.440988 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-w2brd"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.441467 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.446442 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.446514 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-w2brd" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.449101 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.449354 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.451805 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-ss8gd"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.451941 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452222 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4123c23b-ea73-40e1-965a-5b1777b4e2be-serving-cert\") pod \"openshift-config-operator-5777786469-svc7c\" (UID: \"4123c23b-ea73-40e1-965a-5b1777b4e2be\") " pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452255 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c10c417-d8e5-4933-96b6-3a365ea480f3-config\") pod \"machine-approver-54c688565-jbnnd\" (UID: \"2c10c417-d8e5-4933-96b6-3a365ea480f3\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452273 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-slslt\" (UniqueName: \"kubernetes.io/projected/2c10c417-d8e5-4933-96b6-3a365ea480f3-kube-api-access-slslt\") pod \"machine-approver-54c688565-jbnnd\" (UID: \"2c10c417-d8e5-4933-96b6-3a365ea480f3\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452291 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-etcd-client\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452307 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452324 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74f7b3bf-429d-4b60-8b80-48300a789b1d-console-serving-cert\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452341 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-encryption-config\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452355 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74f7b3bf-429d-4b60-8b80-48300a789b1d-oauth-serving-cert\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452384 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/2c10c417-d8e5-4933-96b6-3a365ea480f3-machine-approver-tls\") pod \"machine-approver-54c688565-jbnnd\" (UID: \"2c10c417-d8e5-4933-96b6-3a365ea480f3\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452402 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vwln4\" (UniqueName: \"kubernetes.io/projected/0ff8b42b-9c00-4f62-bc0c-4d14276cfb63-kube-api-access-vwln4\") pod \"openshift-apiserver-operator-846cbfc458-x2xbl\" (UID: \"0ff8b42b-9c00-4f62-bc0c-4d14276cfb63\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452417 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74f7b3bf-429d-4b60-8b80-48300a789b1d-console-oauth-config\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452434 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-node-pullsecrets\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452452 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2722b7b1-fe01-4f55-8114-86b441329659-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr" Mar 20 00:11:19 crc 
kubenswrapper[5106]: I0320 00:11:19.452475 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-config\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452492 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4123c23b-ea73-40e1-965a-5b1777b4e2be-available-featuregates\") pod \"openshift-config-operator-5777786469-svc7c\" (UID: \"4123c23b-ea73-40e1-965a-5b1777b4e2be\") " pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452508 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2722b7b1-fe01-4f55-8114-86b441329659-tmp\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452531 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c10c417-d8e5-4933-96b6-3a365ea480f3-auth-proxy-config\") pod \"machine-approver-54c688565-jbnnd\" (UID: \"2c10c417-d8e5-4933-96b6-3a365ea480f3\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452547 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452562 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8539a810-4a95-4205-99c6-30b6362cfa01-serving-cert\") pod \"route-controller-manager-776cdc94d6-crd8g\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452598 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-57s6r\" (UniqueName: \"kubernetes.io/projected/884b9b2b-1ff2-4758-b964-5030e8973573-kube-api-access-57s6r\") pod \"image-pruner-29566080-czff6\" (UID: \"884b9b2b-1ff2-4758-b964-5030e8973573\") " pod="openshift-image-registry/image-pruner-29566080-czff6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452621 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t74cg\" (UniqueName: \"kubernetes.io/projected/4123c23b-ea73-40e1-965a-5b1777b4e2be-kube-api-access-t74cg\") pod \"openshift-config-operator-5777786469-svc7c\" (UID: \"4123c23b-ea73-40e1-965a-5b1777b4e2be\") " pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452640 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ff8b42b-9c00-4f62-bc0c-4d14276cfb63-config\") pod \"openshift-apiserver-operator-846cbfc458-x2xbl\" (UID: \"0ff8b42b-9c00-4f62-bc0c-4d14276cfb63\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452661 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8539a810-4a95-4205-99c6-30b6362cfa01-config\") pod \"route-controller-manager-776cdc94d6-crd8g\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452681 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-tmp\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452700 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r956k\" (UniqueName: \"kubernetes.io/projected/8539a810-4a95-4205-99c6-30b6362cfa01-kube-api-access-r956k\") pod \"route-controller-manager-776cdc94d6-crd8g\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452718 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8539a810-4a95-4205-99c6-30b6362cfa01-client-ca\") pod \"route-controller-manager-776cdc94d6-crd8g\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452734 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-client-ca\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 
00:11:19.452750 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/884b9b2b-1ff2-4758-b964-5030e8973573-serviceca\") pod \"image-pruner-29566080-czff6\" (UID: \"884b9b2b-1ff2-4758-b964-5030e8973573\") " pod="openshift-image-registry/image-pruner-29566080-czff6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452769 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74f7b3bf-429d-4b60-8b80-48300a789b1d-console-config\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452798 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-45sxg\" (UniqueName: \"kubernetes.io/projected/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-kube-api-access-45sxg\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452826 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8539a810-4a95-4205-99c6-30b6362cfa01-tmp\") pod \"route-controller-manager-776cdc94d6-crd8g\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452848 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2722b7b1-fe01-4f55-8114-86b441329659-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452864 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t2gmf\" (UniqueName: \"kubernetes.io/projected/2722b7b1-fe01-4f55-8114-86b441329659-kube-api-access-t2gmf\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452890 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-serving-cert\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452910 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-serving-cert\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452934 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2722b7b1-fe01-4f55-8114-86b441329659-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452953 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-image-import-ca\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452971 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ff8b42b-9c00-4f62-bc0c-4d14276cfb63-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-x2xbl\" (UID: \"0ff8b42b-9c00-4f62-bc0c-4d14276cfb63\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.452989 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-audit\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.453007 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.453023 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qqtc4\" (UniqueName: \"kubernetes.io/projected/74f7b3bf-429d-4b60-8b80-48300a789b1d-kube-api-access-qqtc4\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.453051 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74f7b3bf-429d-4b60-8b80-48300a789b1d-trusted-ca-bundle\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.453078 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-config\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.453094 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74f7b3bf-429d-4b60-8b80-48300a789b1d-service-ca\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.453114 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-audit-dir\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.453135 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bcpmt\" (UniqueName: \"kubernetes.io/projected/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-kube-api-access-bcpmt\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.453153 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/2722b7b1-fe01-4f55-8114-86b441329659-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.453608 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/2722b7b1-fe01-4f55-8114-86b441329659-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.454299 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-tmp\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.454364 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c10c417-d8e5-4933-96b6-3a365ea480f3-config\") pod \"machine-approver-54c688565-jbnnd\" (UID: \"2c10c417-d8e5-4933-96b6-3a365ea480f3\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.454753 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 
00:11:19.455095 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-audit-dir\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.456390 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-config\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.456764 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4123c23b-ea73-40e1-965a-5b1777b4e2be-available-featuregates\") pod \"openshift-config-operator-5777786469-svc7c\" (UID: \"4123c23b-ea73-40e1-965a-5b1777b4e2be\") " pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.457169 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/74f7b3bf-429d-4b60-8b80-48300a789b1d-service-ca\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.457175 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2722b7b1-fe01-4f55-8114-86b441329659-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 
00:11:19.457255 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-audit\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.457342 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ff8b42b-9c00-4f62-bc0c-4d14276cfb63-config\") pod \"openshift-apiserver-operator-846cbfc458-x2xbl\" (UID: \"0ff8b42b-9c00-4f62-bc0c-4d14276cfb63\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.457417 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.457461 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-node-pullsecrets\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.457472 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.457499 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74f7b3bf-429d-4b60-8b80-48300a789b1d-trusted-ca-bundle\") pod 
\"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.457506 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2722b7b1-fe01-4f55-8114-86b441329659-tmp\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.457642 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8539a810-4a95-4205-99c6-30b6362cfa01-client-ca\") pod \"route-controller-manager-776cdc94d6-crd8g\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.458039 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-client-ca\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.458340 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/74f7b3bf-429d-4b60-8b80-48300a789b1d-oauth-serving-cert\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.458486 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.458978 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2c10c417-d8e5-4933-96b6-3a365ea480f3-auth-proxy-config\") pod \"machine-approver-54c688565-jbnnd\" (UID: \"2c10c417-d8e5-4933-96b6-3a365ea480f3\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.459033 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-2xds7"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.459212 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-image-import-ca\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.459226 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-ss8gd" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.459325 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/884b9b2b-1ff2-4758-b964-5030e8973573-serviceca\") pod \"image-pruner-29566080-czff6\" (UID: \"884b9b2b-1ff2-4758-b964-5030e8973573\") " pod="openshift-image-registry/image-pruner-29566080-czff6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.459110 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-etcd-client\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.459373 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8539a810-4a95-4205-99c6-30b6362cfa01-config\") pod \"route-controller-manager-776cdc94d6-crd8g\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.459866 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/74f7b3bf-429d-4b60-8b80-48300a789b1d-console-config\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.460423 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8539a810-4a95-4205-99c6-30b6362cfa01-serving-cert\") pod \"route-controller-manager-776cdc94d6-crd8g\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.460643 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-encryption-config\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.461022 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4123c23b-ea73-40e1-965a-5b1777b4e2be-serving-cert\") pod \"openshift-config-operator-5777786469-svc7c\" (UID: \"4123c23b-ea73-40e1-965a-5b1777b4e2be\") " pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.461357 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-config\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.461541 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/74f7b3bf-429d-4b60-8b80-48300a789b1d-console-serving-cert\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.461780 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-serving-cert\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: 
\"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.462082 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-serving-cert\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.462100 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ff8b42b-9c00-4f62-bc0c-4d14276cfb63-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-x2xbl\" (UID: \"0ff8b42b-9c00-4f62-bc0c-4d14276cfb63\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.462684 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/74f7b3bf-429d-4b60-8b80-48300a789b1d-console-oauth-config\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.463447 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2c10c417-d8e5-4933-96b6-3a365ea480f3-machine-approver-tls\") pod \"machine-approver-54c688565-jbnnd\" (UID: \"2c10c417-d8e5-4933-96b6-3a365ea480f3\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.463516 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/2722b7b1-fe01-4f55-8114-86b441329659-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.464041 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8539a810-4a95-4205-99c6-30b6362cfa01-tmp\") pod \"route-controller-manager-776cdc94d6-crd8g\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.473846 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-8jhlx"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.474004 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29566080-czff6"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.473946 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-2xds7" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.474074 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-cp4kp"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.474270 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.474285 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-svc7c"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.474296 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8fdp6"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.476071 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479506 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479538 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479553 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479564 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-t49vx"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479602 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479618 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479630 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-xfn66"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479643 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479653 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-zbpp6"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479663 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-72hsj"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479674 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qxnjl"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479685 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479696 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-wqms8"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479709 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jhgps"] Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479720 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sprmn"] Mar 20 00:11:19 crc 
kubenswrapper[5106]: I0320 00:11:19.479732 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479742 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479752 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-vx9v6"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479759 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8fdp6"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.479765 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-dg59t"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.483314 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hdf7z"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.483681 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dg59t"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489401 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489426 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489439 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-w2brd"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489450 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489460 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-5lqg7"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489470 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-ss8gd"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489479 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489488 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489497 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489512 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489558 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dg59t"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489572 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489784 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-h54ck"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489793 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489806 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-2xds7"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489816 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8fdp6"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489824 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.489860 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-q5tjt"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.492823 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q5tjt"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.492844 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8jsvn"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.492900 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q5tjt"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.496462 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.497613 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8jsvn"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.515290 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.536683 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.540029 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.545928 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb"
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.586515 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.602780 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.616202 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.636399 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.657465 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.676923 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.696720 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.716958 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.728057 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-8jhlx"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.732290 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb"]
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.736232 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Mar 20 00:11:19 crc kubenswrapper[5106]: W0320 00:11:19.739688 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda22b44ae_8b94_4d76_9211_859b665d08cb.slice/crio-a5dec4f66379e40f1cb072c430ee8ec2c02bd8cb3e181ec2f9883fce13903bc0 WatchSource:0}: Error finding container a5dec4f66379e40f1cb072c430ee8ec2c02bd8cb3e181ec2f9883fce13903bc0: Status 404 returned error can't find the container with id a5dec4f66379e40f1cb072c430ee8ec2c02bd8cb3e181ec2f9883fce13903bc0
Mar 20 00:11:19 crc kubenswrapper[5106]: W0320 00:11:19.740404 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0af6748b_028f_4ab7_8039_b29223f0f492.slice/crio-9eda29195b222e4d4a8a796cb59b716db874d6dde92f20e856e40bdb4493abaa WatchSource:0}: Error finding container 9eda29195b222e4d4a8a796cb59b716db874d6dde92f20e856e40bdb4493abaa: Status 404 returned error can't find the container with id 9eda29195b222e4d4a8a796cb59b716db874d6dde92f20e856e40bdb4493abaa
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.756482 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.776710 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.797047 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.816225 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.835923 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.857343 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.882866 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.896939 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.917915 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.936566 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.955465 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.976713 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Mar 20 00:11:19 crc kubenswrapper[5106]: I0320 00:11:19.996249 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.016542 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.036147 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.056267 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.076395 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.096129 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.116642 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.136709 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.156062 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.176818 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.195956 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.216608 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.236252 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.256100 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.276297 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.297079 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.316622 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.336064 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.356078 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.376070 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.395895 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.414673 5106 request.go:752] "Waited before sending request" delay="1.005291614s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-tnfx9&limit=500&resourceVersion=0"
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.416079 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.435988 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.457387 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.476624 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.496260 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.515900 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.535800 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.556485 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.576638 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.596903 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.616863 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.636760 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.656800 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.675518 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.691064 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb" event={"ID":"0af6748b-028f-4ab7-8039-b29223f0f492","Type":"ContainerStarted","Data":"d746ea24fe6789311f9e52d6a5a65443375cfad70ac076d524cbc276f797461e"}
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.691110 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb" event={"ID":"0af6748b-028f-4ab7-8039-b29223f0f492","Type":"ContainerStarted","Data":"9eda29195b222e4d4a8a796cb59b716db874d6dde92f20e856e40bdb4493abaa"}
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.692567 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx" event={"ID":"a22b44ae-8b94-4d76-9211-859b665d08cb","Type":"ContainerStarted","Data":"8593e7a8c9b040cf1273082f76b2ade5e6a03c31c3155c2becc59bbd6b70b270"}
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.692627 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx" event={"ID":"a22b44ae-8b94-4d76-9211-859b665d08cb","Type":"ContainerStarted","Data":"288e8763b9cd6367d6cec9a1d5d5cb07998f6e25c3d5334b008d154b64e96ebf"}
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.692642 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx" event={"ID":"a22b44ae-8b94-4d76-9211-859b665d08cb","Type":"ContainerStarted","Data":"a5dec4f66379e40f1cb072c430ee8ec2c02bd8cb3e181ec2f9883fce13903bc0"}
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.696413 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.716140 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.735821 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.757095 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.777102 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.795816 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.816877 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.836719 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.856106 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.875844 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.896622 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.916245 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.936250 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.975897 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Mar 20 00:11:20 crc kubenswrapper[5106]: I0320 00:11:20.996210 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.016128 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.036483 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.057140 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.075853 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.103269 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.115914 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.136688 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.156320 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.196111 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-slslt\" (UniqueName: \"kubernetes.io/projected/2c10c417-d8e5-4933-96b6-3a365ea480f3-kube-api-access-slslt\") pod \"machine-approver-54c688565-jbnnd\" (UID: \"2c10c417-d8e5-4933-96b6-3a365ea480f3\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.203706 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.218906 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcpmt\" (UniqueName: \"kubernetes.io/projected/17377ffd-aa79-4dee-bfea-6ae6b3026fd1-kube-api-access-bcpmt\") pod \"apiserver-9ddfb9f55-wqms8\" (UID: \"17377ffd-aa79-4dee-bfea-6ae6b3026fd1\") " pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:21 crc kubenswrapper[5106]: W0320 00:11:21.222139 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c10c417_d8e5_4933_96b6_3a365ea480f3.slice/crio-c9906a09a977f9cdb04b469baf0379d7e24cfab101858c3afca6efc5802331c1 WatchSource:0}: Error finding container c9906a09a977f9cdb04b469baf0379d7e24cfab101858c3afca6efc5802331c1: Status 404 returned error can't find the container with id c9906a09a977f9cdb04b469baf0379d7e24cfab101858c3afca6efc5802331c1
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.236183 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-57s6r\" (UniqueName: \"kubernetes.io/projected/884b9b2b-1ff2-4758-b964-5030e8973573-kube-api-access-57s6r\") pod \"image-pruner-29566080-czff6\" (UID: \"884b9b2b-1ff2-4758-b964-5030e8973573\") " pod="openshift-image-registry/image-pruner-29566080-czff6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.250260 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r956k\" (UniqueName: \"kubernetes.io/projected/8539a810-4a95-4205-99c6-30b6362cfa01-kube-api-access-r956k\") pod \"route-controller-manager-776cdc94d6-crd8g\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.272304 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqtc4\" (UniqueName: \"kubernetes.io/projected/74f7b3bf-429d-4b60-8b80-48300a789b1d-kube-api-access-qqtc4\") pod \"console-64d44f6ddf-vx9v6\" (UID: \"74f7b3bf-429d-4b60-8b80-48300a789b1d\") " pod="openshift-console/console-64d44f6ddf-vx9v6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.296247 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t74cg\" (UniqueName: \"kubernetes.io/projected/4123c23b-ea73-40e1-965a-5b1777b4e2be-kube-api-access-t74cg\") pod \"openshift-config-operator-5777786469-svc7c\" (UID: \"4123c23b-ea73-40e1-965a-5b1777b4e2be\") " pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.310704 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwln4\" (UniqueName: \"kubernetes.io/projected/0ff8b42b-9c00-4f62-bc0c-4d14276cfb63-kube-api-access-vwln4\") pod \"openshift-apiserver-operator-846cbfc458-x2xbl\" (UID: \"0ff8b42b-9c00-4f62-bc0c-4d14276cfb63\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.330636 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-45sxg\" (UniqueName: \"kubernetes.io/projected/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-kube-api-access-45sxg\") pod \"controller-manager-65b6cccf98-cp4kp\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.350213 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2gmf\" (UniqueName: \"kubernetes.io/projected/2722b7b1-fe01-4f55-8114-86b441329659-kube-api-access-t2gmf\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.368115 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.371010 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2722b7b1-fe01-4f55-8114-86b441329659-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-fdqzr\" (UID: \"2722b7b1-fe01-4f55-8114-86b441329659\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.376693 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.396893 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.397838 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29566080-czff6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.407765 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.416658 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.434890 5106 request.go:752] "Waited before sending request" delay="1.960525273s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.436567 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.442114 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.456798 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.457204 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.476490 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.487712 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-vx9v6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.496036 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.510974 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.517624 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.517799 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.541555 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.557264 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.578248 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.584872 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-cp4kp"]
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.596352 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Mar 20 00:11:21 crc kubenswrapper[5106]: W0320 00:11:21.613272 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ae55f47_30bb_45c6_bd6c_7fa0c7810d38.slice/crio-aab7aff0f9121715749223b0f541a236f4c1cacb3031269d8a82f89ff4d47c41 WatchSource:0}: Error finding container aab7aff0f9121715749223b0f541a236f4c1cacb3031269d8a82f89ff4d47c41: Status 404 returned error can't find the container with id aab7aff0f9121715749223b0f541a236f4c1cacb3031269d8a82f89ff4d47c41
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.617130 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.630557 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29566080-czff6"]
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.635923 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.650947 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g"]
Mar 20 00:11:21 crc kubenswrapper[5106]: W0320 00:11:21.652745 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod884b9b2b_1ff2_4758_b964_5030e8973573.slice/crio-31b7e39aa5aafabf81b9c417bbb28dbd1dce1245a51df2fc8f51d0ca88e68ffd WatchSource:0}: Error finding container 31b7e39aa5aafabf81b9c417bbb28dbd1dce1245a51df2fc8f51d0ca88e68ffd: Status 404 returned error can't find the container with id 31b7e39aa5aafabf81b9c417bbb28dbd1dce1245a51df2fc8f51d0ca88e68ffd
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.656866 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.679620 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.698203 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.698722 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr"]
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.712821 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" event={"ID":"8539a810-4a95-4205-99c6-30b6362cfa01","Type":"ContainerStarted","Data":"05f7bfed10d161aa463707c6e400bb710f9cb7be8b557d6d665550e52b441988"}
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.716249 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd" event={"ID":"2c10c417-d8e5-4933-96b6-3a365ea480f3","Type":"ContainerStarted","Data":"c7687814eb7d82b44a6237a510801520ee15f43ce1315e976ae84ebc2e334ba0"}
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.716304 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd" event={"ID":"2c10c417-d8e5-4933-96b6-3a365ea480f3","Type":"ContainerStarted","Data":"c9906a09a977f9cdb04b469baf0379d7e24cfab101858c3afca6efc5802331c1"}
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.716996 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.718689 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29566080-czff6"
event={"ID":"884b9b2b-1ff2-4758-b964-5030e8973573","Type":"ContainerStarted","Data":"31b7e39aa5aafabf81b9c417bbb28dbd1dce1245a51df2fc8f51d0ca88e68ffd"} Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.721206 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" event={"ID":"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38","Type":"ContainerStarted","Data":"aab7aff0f9121715749223b0f541a236f4c1cacb3031269d8a82f89ff4d47c41"} Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.735262 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-wqms8"] Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.736991 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Mar 20 00:11:21 crc kubenswrapper[5106]: W0320 00:11:21.747822 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17377ffd_aa79_4dee_bfea_6ae6b3026fd1.slice/crio-b8244c61fff1c339754201560b82c50d0b1cdcf11cbb4bf30fd8b98a4d5501d6 WatchSource:0}: Error finding container b8244c61fff1c339754201560b82c50d0b1cdcf11cbb4bf30fd8b98a4d5501d6: Status 404 returned error can't find the container with id b8244c61fff1c339754201560b82c50d0b1cdcf11cbb4bf30fd8b98a4d5501d6 Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.755545 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-vx9v6"] Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.758275 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.768785 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-svc7c"] Mar 20 00:11:21 
crc kubenswrapper[5106]: W0320 00:11:21.777129 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74f7b3bf_429d_4b60_8b80_48300a789b1d.slice/crio-53f370a6438274e7a4c6592732b681ada0f8df96bef1d95f6e598a0913aa7320 WatchSource:0}: Error finding container 53f370a6438274e7a4c6592732b681ada0f8df96bef1d95f6e598a0913aa7320: Status 404 returned error can't find the container with id 53f370a6438274e7a4c6592732b681ada0f8df96bef1d95f6e598a0913aa7320 Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.800769 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl"] Mar 20 00:11:21 crc kubenswrapper[5106]: W0320 00:11:21.801277 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4123c23b_ea73_40e1_965a_5b1777b4e2be.slice/crio-205061e1ef5943e0f6701fd9f0b466eff3f0ac3066f2d1c397dd86253a28094a WatchSource:0}: Error finding container 205061e1ef5943e0f6701fd9f0b466eff3f0ac3066f2d1c397dd86253a28094a: Status 404 returned error can't find the container with id 205061e1ef5943e0f6701fd9f0b466eff3f0ac3066f2d1c397dd86253a28094a Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.879397 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cb851d4f-eefc-463d-bf17-c8ac126bd7c1-images\") pod \"machine-config-operator-67c9d58cbb-hzg88\" (UID: \"cb851d4f-eefc-463d-bf17-c8ac126bd7c1\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.879438 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba3de43-9844-4e15-b900-5a48bac6f058-config\") pod 
\"console-operator-67c89758df-qxnjl\" (UID: \"aba3de43-9844-4e15-b900-5a48bac6f058\") " pod="openshift-console-operator/console-operator-67c89758df-qxnjl" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.879464 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/61dc866d-abfc-4dea-a349-6635b614e189-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-vsgrz\" (UID: \"61dc866d-abfc-4dea-a349-6635b614e189\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.879663 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20d42914-6ce8-4457-aa77-e01ef4fb9895-trusted-ca-bundle\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.879702 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c-profile-collector-cert\") pod \"olm-operator-5cdf44d969-8l78l\" (UID: \"d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.879728 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aba3de43-9844-4e15-b900-5a48bac6f058-trusted-ca\") pod \"console-operator-67c89758df-qxnjl\" (UID: \"aba3de43-9844-4e15-b900-5a48bac6f058\") " pod="openshift-console-operator/console-operator-67c89758df-qxnjl" Mar 20 00:11:21 crc kubenswrapper[5106]: 
I0320 00:11:21.879747 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4kmk\" (UniqueName: \"kubernetes.io/projected/7724e258-4050-4d7e-83c9-40b6dec81d33-kube-api-access-l4kmk\") pod \"multus-admission-controller-69db94689b-5lqg7\" (UID: \"7724e258-4050-4d7e-83c9-40b6dec81d33\") " pod="openshift-multus/multus-admission-controller-69db94689b-5lqg7" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.879768 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd6nq\" (UniqueName: \"kubernetes.io/projected/c2b53a90-8ad7-40b2-b35a-2f35af352e6b-kube-api-access-wd6nq\") pod \"machine-config-controller-f9cdd68f7-jxtsx\" (UID: \"c2b53a90-8ad7-40b2-b35a-2f35af352e6b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.879785 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44gxl\" (UniqueName: \"kubernetes.io/projected/e63e177b-5ff9-4662-be8b-4b193c72fc72-kube-api-access-44gxl\") pod \"ingress-operator-6b9cb4dbcf-smhtw\" (UID: \"e63e177b-5ff9-4662-be8b-4b193c72fc72\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.879810 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/992000e3-50f4-48fa-8a55-58bfade85d0c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-w2brd\" (UID: \"992000e3-50f4-48fa-8a55-58bfade85d0c\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-w2brd" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.880333 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.880531 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.880625 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2b53a90-8ad7-40b2-b35a-2f35af352e6b-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-jxtsx\" (UID: \"c2b53a90-8ad7-40b2-b35a-2f35af352e6b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.880740 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrdmn\" (UniqueName: \"kubernetes.io/projected/59096bb7-5757-4196-96a5-f14e967998e7-kube-api-access-lrdmn\") pod \"marketplace-operator-547dbd544d-xfn66\" (UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.880806 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-serving-cert\") 
pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: E0320 00:11:21.880827 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:22.380812744 +0000 UTC m=+136.814546798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.880893 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/88acb60d-ae97-490e-bab2-b78f03e1b8c8-etcd-ca\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.880913 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-qgpkp\" (UID: \"8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.880939 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-6lzx8\" (UniqueName: \"kubernetes.io/projected/88da1299-0802-4745-8701-7de465542299-kube-api-access-6lzx8\") pod \"collect-profiles-29566080-tg7xz\" (UID: \"88da1299-0802-4745-8701-7de465542299\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.880955 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.880971 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml5xh\" (UniqueName: \"kubernetes.io/projected/af8b1c72-0d76-40cc-9135-92bdefd2a461-kube-api-access-ml5xh\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.880986 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2lnq\" (UniqueName: \"kubernetes.io/projected/c0575414-b1d1-44db-b352-6f101cce8c8f-kube-api-access-r2lnq\") pod \"kube-storage-version-migrator-operator-565b79b866-q54jx\" (UID: \"c0575414-b1d1-44db-b352-6f101cce8c8f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881009 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94b3abc0-a538-458d-9975-c9b1a373ee95-config\") 
pod \"service-ca-operator-5b9c976747-cnh84\" (UID: \"94b3abc0-a538-458d-9975-c9b1a373ee95\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881024 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20d42914-6ce8-4457-aa77-e01ef4fb9895-audit-policies\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881042 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n2dj\" (UniqueName: \"kubernetes.io/projected/15d5d3b4-91df-49a0-9032-ebd865eacb5a-kube-api-access-7n2dj\") pod \"migrator-866fcbc849-h54ck\" (UID: \"15d5d3b4-91df-49a0-9032-ebd865eacb5a\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h54ck" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881058 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/00c02264-3068-4287-a30a-13b0003bf5e1-ca-trust-extracted\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881074 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aa4c8fd-ca13-4ef3-b774-d55fd525fe13-serving-cert\") pod \"kube-apiserver-operator-575994946d-ncds5\" (UID: \"7aa4c8fd-ca13-4ef3-b774-d55fd525fe13\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881090 5106 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aa4c8fd-ca13-4ef3-b774-d55fd525fe13-config\") pod \"kube-apiserver-operator-575994946d-ncds5\" (UID: \"7aa4c8fd-ca13-4ef3-b774-d55fd525fe13\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881151 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlzqr\" (UniqueName: \"kubernetes.io/projected/aba3de43-9844-4e15-b900-5a48bac6f058-kube-api-access-rlzqr\") pod \"console-operator-67c89758df-qxnjl\" (UID: \"aba3de43-9844-4e15-b900-5a48bac6f058\") " pod="openshift-console-operator/console-operator-67c89758df-qxnjl" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881264 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20d42914-6ce8-4457-aa77-e01ef4fb9895-etcd-serving-ca\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881327 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94b3abc0-a538-458d-9975-c9b1a373ee95-serving-cert\") pod \"service-ca-operator-5b9c976747-cnh84\" (UID: \"94b3abc0-a538-458d-9975-c9b1a373ee95\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881357 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a134eee6-8b26-4a27-8fbe-6fbc51787dc4-metrics-certs\") pod \"router-default-68cf44c8b8-vzb7m\" (UID: 
\"a134eee6-8b26-4a27-8fbe-6fbc51787dc4\") " pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881493 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27db15f2-d153-4ecb-beb5-b139549dcb36-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-sprmn\" (UID: \"27db15f2-d153-4ecb-beb5-b139549dcb36\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sprmn" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881534 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/00c02264-3068-4287-a30a-13b0003bf5e1-registry-certificates\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881590 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88da1299-0802-4745-8701-7de465542299-config-volume\") pod \"collect-profiles-29566080-tg7xz\" (UID: \"88da1299-0802-4745-8701-7de465542299\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881627 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e63e177b-5ff9-4662-be8b-4b193c72fc72-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-smhtw\" (UID: \"e63e177b-5ff9-4662-be8b-4b193c72fc72\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881698 5106 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4081cd08-5e12-4cca-bfd2-666bb6d87464-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-n7m7v\" (UID: \"4081cd08-5e12-4cca-bfd2-666bb6d87464\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881767 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xczxt\" (UniqueName: \"kubernetes.io/projected/94b3abc0-a538-458d-9975-c9b1a373ee95-kube-api-access-xczxt\") pod \"service-ca-operator-5b9c976747-cnh84\" (UID: \"94b3abc0-a538-458d-9975-c9b1a373ee95\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.881953 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.884278 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a134eee6-8b26-4a27-8fbe-6fbc51787dc4-default-certificate\") pod \"router-default-68cf44c8b8-vzb7m\" (UID: \"a134eee6-8b26-4a27-8fbe-6fbc51787dc4\") " pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.884325 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/cf878343-818d-4ca7-a3ce-507df55ae4c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-f42c8\" (UID: \"cf878343-818d-4ca7-a3ce-507df55ae4c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.884524 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.884556 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7aa4c8fd-ca13-4ef3-b774-d55fd525fe13-kube-api-access\") pod \"kube-apiserver-operator-575994946d-ncds5\" (UID: \"7aa4c8fd-ca13-4ef3-b774-d55fd525fe13\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.884708 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20d42914-6ce8-4457-aa77-e01ef4fb9895-etcd-client\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.884734 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20d42914-6ce8-4457-aa77-e01ef4fb9895-encryption-config\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.884756 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7724e258-4050-4d7e-83c9-40b6dec81d33-webhook-certs\") pod \"multus-admission-controller-69db94689b-5lqg7\" (UID: \"7724e258-4050-4d7e-83c9-40b6dec81d33\") " pod="openshift-multus/multus-admission-controller-69db94689b-5lqg7" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.884781 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.884805 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwtgx\" (UniqueName: \"kubernetes.io/projected/27db15f2-d153-4ecb-beb5-b139549dcb36-kube-api-access-nwtgx\") pod \"cluster-samples-operator-6b564684c8-sprmn\" (UID: \"27db15f2-d153-4ecb-beb5-b139549dcb36\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sprmn" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.884826 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/88acb60d-ae97-490e-bab2-b78f03e1b8c8-tmp-dir\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.884853 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-dfcj2\" (UniqueName: \"kubernetes.io/projected/61dc866d-abfc-4dea-a349-6635b614e189-kube-api-access-dfcj2\") pod \"package-server-manager-77f986bd66-vsgrz\" (UID: \"61dc866d-abfc-4dea-a349-6635b614e189\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.884915 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a134eee6-8b26-4a27-8fbe-6fbc51787dc4-stats-auth\") pod \"router-default-68cf44c8b8-vzb7m\" (UID: \"a134eee6-8b26-4a27-8fbe-6fbc51787dc4\") " pod="openshift-ingress/router-default-68cf44c8b8-vzb7m"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885063 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f87mz\" (UniqueName: \"kubernetes.io/projected/cb851d4f-eefc-463d-bf17-c8ac126bd7c1-kube-api-access-f87mz\") pod \"machine-config-operator-67c9d58cbb-hzg88\" (UID: \"cb851d4f-eefc-463d-bf17-c8ac126bd7c1\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885379 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-audit-policies\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885422 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4zmp\" (UniqueName: \"kubernetes.io/projected/a134eee6-8b26-4a27-8fbe-6fbc51787dc4-kube-api-access-w4zmp\") pod \"router-default-68cf44c8b8-vzb7m\" (UID: \"a134eee6-8b26-4a27-8fbe-6fbc51787dc4\") " pod="openshift-ingress/router-default-68cf44c8b8-vzb7m"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885455 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20d42914-6ce8-4457-aa77-e01ef4fb9895-serving-cert\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885480 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf878343-818d-4ca7-a3ce-507df55ae4c5-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-f42c8\" (UID: \"cf878343-818d-4ca7-a3ce-507df55ae4c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885509 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3da82db9-f242-4af1-83ef-d68599ce6c8d-metrics-tls\") pod \"dns-operator-799b87ffcd-72hsj\" (UID: \"3da82db9-f242-4af1-83ef-d68599ce6c8d\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-72hsj"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885530 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c-tmpfs\") pod \"olm-operator-5cdf44d969-8l78l\" (UID: \"d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885549 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4-srv-cert\") pod \"catalog-operator-75ff9f647d-qgpkp\" (UID: \"8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885633 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20d42914-6ce8-4457-aa77-e01ef4fb9895-audit-dir\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885662 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885683 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4081cd08-5e12-4cca-bfd2-666bb6d87464-config\") pod \"kube-controller-manager-operator-69d5f845f8-n7m7v\" (UID: \"4081cd08-5e12-4cca-bfd2-666bb6d87464\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885714 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88acb60d-ae97-490e-bab2-b78f03e1b8c8-serving-cert\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885739 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4081cd08-5e12-4cca-bfd2-666bb6d87464-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-n7m7v\" (UID: \"4081cd08-5e12-4cca-bfd2-666bb6d87464\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885756 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blwnm\" (UniqueName: \"kubernetes.io/projected/88acb60d-ae97-490e-bab2-b78f03e1b8c8-kube-api-access-blwnm\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885771 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0575414-b1d1-44db-b352-6f101cce8c8f-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-q54jx\" (UID: \"c0575414-b1d1-44db-b352-6f101cce8c8f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885797 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4-tmpfs\") pod \"catalog-operator-75ff9f647d-qgpkp\" (UID: \"8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885854 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb851d4f-eefc-463d-bf17-c8ac126bd7c1-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-hzg88\" (UID: \"cb851d4f-eefc-463d-bf17-c8ac126bd7c1\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885884 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aba3de43-9844-4e15-b900-5a48bac6f058-serving-cert\") pod \"console-operator-67c89758df-qxnjl\" (UID: \"aba3de43-9844-4e15-b900-5a48bac6f058\") " pod="openshift-console-operator/console-operator-67c89758df-qxnjl"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885958 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cb851d4f-eefc-463d-bf17-c8ac126bd7c1-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-hzg88\" (UID: \"cb851d4f-eefc-463d-bf17-c8ac126bd7c1\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.885993 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886011 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2b53a90-8ad7-40b2-b35a-2f35af352e6b-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-jxtsx\" (UID: \"c2b53a90-8ad7-40b2-b35a-2f35af352e6b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886029 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5zsf\" (UniqueName: \"kubernetes.io/projected/20d42914-6ce8-4457-aa77-e01ef4fb9895-kube-api-access-k5zsf\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886045 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/88acb60d-ae97-490e-bab2-b78f03e1b8c8-etcd-client\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886140 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/00c02264-3068-4287-a30a-13b0003bf5e1-installation-pull-secrets\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886330 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88da1299-0802-4745-8701-7de465542299-secret-volume\") pod \"collect-profiles-29566080-tg7xz\" (UID: \"88da1299-0802-4745-8701-7de465542299\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886456 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00c02264-3068-4287-a30a-13b0003bf5e1-trusted-ca\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886492 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bx85\" (UniqueName: \"kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-kube-api-access-5bx85\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886514 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886535 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf878343-818d-4ca7-a3ce-507df55ae4c5-config\") pod \"openshift-kube-scheduler-operator-54f497555d-f42c8\" (UID: \"cf878343-818d-4ca7-a3ce-507df55ae4c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886653 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c-srv-cert\") pod \"olm-operator-5cdf44d969-8l78l\" (UID: \"d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886678 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/88acb60d-ae97-490e-bab2-b78f03e1b8c8-etcd-service-ca\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886716 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4081cd08-5e12-4cca-bfd2-666bb6d87464-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-n7m7v\" (UID: \"4081cd08-5e12-4cca-bfd2-666bb6d87464\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886735 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bs5p\" (UniqueName: \"kubernetes.io/projected/992000e3-50f4-48fa-8a55-58bfade85d0c-kube-api-access-7bs5p\") pod \"control-plane-machine-set-operator-75ffdb6fcd-w2brd\" (UID: \"992000e3-50f4-48fa-8a55-58bfade85d0c\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-w2brd"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886851 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/af8b1c72-0d76-40cc-9135-92bdefd2a461-audit-dir\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886886 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59096bb7-5757-4196-96a5-f14e967998e7-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-xfn66\" (UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886937 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3da82db9-f242-4af1-83ef-d68599ce6c8d-tmp-dir\") pod \"dns-operator-799b87ffcd-72hsj\" (UID: \"3da82db9-f242-4af1-83ef-d68599ce6c8d\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-72hsj"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.886984 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/59096bb7-5757-4196-96a5-f14e967998e7-tmp\") pod \"marketplace-operator-547dbd544d-xfn66\" (UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.887010 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cf878343-818d-4ca7-a3ce-507df55ae4c5-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-f42c8\" (UID: \"cf878343-818d-4ca7-a3ce-507df55ae4c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.887055 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a134eee6-8b26-4a27-8fbe-6fbc51787dc4-service-ca-bundle\") pod \"router-default-68cf44c8b8-vzb7m\" (UID: \"a134eee6-8b26-4a27-8fbe-6fbc51787dc4\") " pod="openshift-ingress/router-default-68cf44c8b8-vzb7m"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.887088 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7aa4c8fd-ca13-4ef3-b774-d55fd525fe13-tmp-dir\") pod \"kube-apiserver-operator-575994946d-ncds5\" (UID: \"7aa4c8fd-ca13-4ef3-b774-d55fd525fe13\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.887161 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/59096bb7-5757-4196-96a5-f14e967998e7-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-xfn66\" (UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.887265 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m8rh\" (UniqueName: \"kubernetes.io/projected/3da82db9-f242-4af1-83ef-d68599ce6c8d-kube-api-access-5m8rh\") pod \"dns-operator-799b87ffcd-72hsj\" (UID: \"3da82db9-f242-4af1-83ef-d68599ce6c8d\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-72hsj"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.887297 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-registry-tls\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.887321 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0575414-b1d1-44db-b352-6f101cce8c8f-config\") pod \"kube-storage-version-migrator-operator-565b79b866-q54jx\" (UID: \"c0575414-b1d1-44db-b352-6f101cce8c8f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.887360 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.887391 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e63e177b-5ff9-4662-be8b-4b193c72fc72-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-smhtw\" (UID: \"e63e177b-5ff9-4662-be8b-4b193c72fc72\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.887418 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88acb60d-ae97-490e-bab2-b78f03e1b8c8-config\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.887441 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndxqd\" (UniqueName: \"kubernetes.io/projected/8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4-kube-api-access-ndxqd\") pod \"catalog-operator-75ff9f647d-qgpkp\" (UID: \"8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.887465 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-bound-sa-token\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.887485 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.887525 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e63e177b-5ff9-4662-be8b-4b193c72fc72-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-smhtw\" (UID: \"e63e177b-5ff9-4662-be8b-4b193c72fc72\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.887548 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ftcp\" (UniqueName: \"kubernetes.io/projected/d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c-kube-api-access-6ftcp\") pod \"olm-operator-5cdf44d969-8l78l\" (UID: \"d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.988930 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:21 crc kubenswrapper[5106]: E0320 00:11:21.989216 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:22.489194698 +0000 UTC m=+136.922928882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989402 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-socket-dir\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989442 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4-tmpfs\") pod \"catalog-operator-75ff9f647d-qgpkp\" (UID: \"8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989473 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb851d4f-eefc-463d-bf17-c8ac126bd7c1-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-hzg88\" (UID: \"cb851d4f-eefc-463d-bf17-c8ac126bd7c1\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989498 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aba3de43-9844-4e15-b900-5a48bac6f058-serving-cert\") pod \"console-operator-67c89758df-qxnjl\" (UID: \"aba3de43-9844-4e15-b900-5a48bac6f058\") " pod="openshift-console-operator/console-operator-67c89758df-qxnjl"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989519 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cb851d4f-eefc-463d-bf17-c8ac126bd7c1-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-hzg88\" (UID: \"cb851d4f-eefc-463d-bf17-c8ac126bd7c1\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989542 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989567 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2b53a90-8ad7-40b2-b35a-2f35af352e6b-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-jxtsx\" (UID: \"c2b53a90-8ad7-40b2-b35a-2f35af352e6b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989611 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k5zsf\" (UniqueName: \"kubernetes.io/projected/20d42914-6ce8-4457-aa77-e01ef4fb9895-kube-api-access-k5zsf\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989634 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/88acb60d-ae97-490e-bab2-b78f03e1b8c8-etcd-client\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989657 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/00c02264-3068-4287-a30a-13b0003bf5e1-installation-pull-secrets\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989683 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0d49bd21-508b-4161-9bef-e0bad55ee83b-ready\") pod \"cni-sysctl-allowlist-ds-hdf7z\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989704 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-csi-data-dir\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989729 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88da1299-0802-4745-8701-7de465542299-secret-volume\") pod \"collect-profiles-29566080-tg7xz\" (UID: \"88da1299-0802-4745-8701-7de465542299\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989751 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0d49bd21-508b-4161-9bef-e0bad55ee83b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hdf7z\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989774 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6e237e2-c84f-498f-888a-4fdaa7af3eb8-apiservice-cert\") pod \"packageserver-7d4fc7d867-lw4rt\" (UID: \"e6e237e2-c84f-498f-888a-4fdaa7af3eb8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989799 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00c02264-3068-4287-a30a-13b0003bf5e1-trusted-ca\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989821 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5bx85\" (UniqueName: \"kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-kube-api-access-5bx85\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989844 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989870 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf878343-818d-4ca7-a3ce-507df55ae4c5-config\") pod \"openshift-kube-scheduler-operator-54f497555d-f42c8\" (UID: \"cf878343-818d-4ca7-a3ce-507df55ae4c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.989951 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2de01ede-f866-4638-9351-ab1ef6392aba-config-volume\") pod \"dns-default-dg59t\" (UID: \"2de01ede-f866-4638-9351-ab1ef6392aba\") " pod="openshift-dns/dns-default-dg59t"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.990006 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4-tmpfs\") pod \"catalog-operator-75ff9f647d-qgpkp\" (UID: \"8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.990192 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c-srv-cert\") pod \"olm-operator-5cdf44d969-8l78l\" (UID: \"d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.990247 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/88acb60d-ae97-490e-bab2-b78f03e1b8c8-etcd-service-ca\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.990313 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4081cd08-5e12-4cca-bfd2-666bb6d87464-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-n7m7v\" (UID: \"4081cd08-5e12-4cca-bfd2-666bb6d87464\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.990340 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7bs5p\" (UniqueName: \"kubernetes.io/projected/992000e3-50f4-48fa-8a55-58bfade85d0c-kube-api-access-7bs5p\") pod \"control-plane-machine-set-operator-75ffdb6fcd-w2brd\" (UID: \"992000e3-50f4-48fa-8a55-58bfade85d0c\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-w2brd"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.990373 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r925b\" (UniqueName: \"kubernetes.io/projected/9613d763-dd08-4d6e-8cf3-ef60b7ef3211-kube-api-access-r925b\") pod \"service-ca-74545575db-2xds7\" (UID: \"9613d763-dd08-4d6e-8cf3-ef60b7ef3211\") " pod="openshift-service-ca/service-ca-74545575db-2xds7"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.990398 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/af8b1c72-0d76-40cc-9135-92bdefd2a461-audit-dir\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.990430 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59096bb7-5757-4196-96a5-f14e967998e7-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-xfn66\" (UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.990455 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76pwp\" (UniqueName: \"kubernetes.io/projected/969ecef5-bb59-4625-b72a-90db5ebb851c-kube-api-access-76pwp\") pod \"ingress-canary-q5tjt\" (UID: \"969ecef5-bb59-4625-b72a-90db5ebb851c\") " pod="openshift-ingress-canary/ingress-canary-q5tjt"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.990487 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3da82db9-f242-4af1-83ef-d68599ce6c8d-tmp-dir\") pod \"dns-operator-799b87ffcd-72hsj\" (UID: \"3da82db9-f242-4af1-83ef-d68599ce6c8d\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-72hsj"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.990517 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/59096bb7-5757-4196-96a5-f14e967998e7-tmp\") pod \"marketplace-operator-547dbd544d-xfn66\" (UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.990542 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cf878343-818d-4ca7-a3ce-507df55ae4c5-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-f42c8\" (UID: \"cf878343-818d-4ca7-a3ce-507df55ae4c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.990586 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a134eee6-8b26-4a27-8fbe-6fbc51787dc4-service-ca-bundle\") pod \"router-default-68cf44c8b8-vzb7m\" (UID: \"a134eee6-8b26-4a27-8fbe-6fbc51787dc4\") " pod="openshift-ingress/router-default-68cf44c8b8-vzb7m"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.991202 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6e237e2-c84f-498f-888a-4fdaa7af3eb8-webhook-cert\") pod \"packageserver-7d4fc7d867-lw4rt\" (UID: \"e6e237e2-c84f-498f-888a-4fdaa7af3eb8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.991231 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0f667eb1-070a-46f5-acb6-3532ff089720-node-bootstrap-token\") pod \"machine-config-server-8jsvn\" (UID: \"0f667eb1-070a-46f5-acb6-3532ff089720\") " pod="openshift-machine-config-operator/machine-config-server-8jsvn"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.991261 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7aa4c8fd-ca13-4ef3-b774-d55fd525fe13-tmp-dir\") pod \"kube-apiserver-operator-575994946d-ncds5\" (UID: \"7aa4c8fd-ca13-4ef3-b774-d55fd525fe13\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.991284 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0f667eb1-070a-46f5-acb6-3532ff089720-certs\") pod \"machine-config-server-8jsvn\" (UID: \"0f667eb1-070a-46f5-acb6-3532ff089720\") " pod="openshift-machine-config-operator/machine-config-server-8jsvn"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.991321 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/59096bb7-5757-4196-96a5-f14e967998e7-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-xfn66\" (UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.991374 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5m8rh\" (UniqueName: \"kubernetes.io/projected/3da82db9-f242-4af1-83ef-d68599ce6c8d-kube-api-access-5m8rh\") pod \"dns-operator-799b87ffcd-72hsj\" (UID: \"3da82db9-f242-4af1-83ef-d68599ce6c8d\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-72hsj"
Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.991406 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qbjj\"
(UniqueName: \"kubernetes.io/projected/0f667eb1-070a-46f5-acb6-3532ff089720-kube-api-access-8qbjj\") pod \"machine-config-server-8jsvn\" (UID: \"0f667eb1-070a-46f5-acb6-3532ff089720\") " pod="openshift-machine-config-operator/machine-config-server-8jsvn" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.991438 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-registry-tls\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.991464 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0575414-b1d1-44db-b352-6f101cce8c8f-config\") pod \"kube-storage-version-migrator-operator-565b79b866-q54jx\" (UID: \"c0575414-b1d1-44db-b352-6f101cce8c8f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.991492 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.991527 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e63e177b-5ff9-4662-be8b-4b193c72fc72-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-smhtw\" (UID: \"e63e177b-5ff9-4662-be8b-4b193c72fc72\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" Mar 20 00:11:21 crc 
kubenswrapper[5106]: I0320 00:11:21.991549 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88acb60d-ae97-490e-bab2-b78f03e1b8c8-config\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.991731 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3da82db9-f242-4af1-83ef-d68599ce6c8d-tmp-dir\") pod \"dns-operator-799b87ffcd-72hsj\" (UID: \"3da82db9-f242-4af1-83ef-d68599ce6c8d\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-72hsj" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992194 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ndxqd\" (UniqueName: \"kubernetes.io/projected/8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4-kube-api-access-ndxqd\") pod \"catalog-operator-75ff9f647d-qgpkp\" (UID: \"8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992206 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00c02264-3068-4287-a30a-13b0003bf5e1-trusted-ca\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992240 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2de01ede-f866-4638-9351-ab1ef6392aba-tmp-dir\") pod \"dns-default-dg59t\" (UID: \"2de01ede-f866-4638-9351-ab1ef6392aba\") " pod="openshift-dns/dns-default-dg59t" Mar 20 00:11:21 crc 
kubenswrapper[5106]: I0320 00:11:21.992270 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b07e83a4-1ea1-490a-8c95-49627f697ee0-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-v7lf8\" (UID: \"b07e83a4-1ea1-490a-8c95-49627f697ee0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992291 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b07e83a4-1ea1-490a-8c95-49627f697ee0-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-v7lf8\" (UID: \"b07e83a4-1ea1-490a-8c95-49627f697ee0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992313 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhhm8\" (UniqueName: \"kubernetes.io/projected/9662276f-9936-4ed0-a464-c509bbaaa7a0-kube-api-access-vhhm8\") pod \"downloads-747b44746d-ss8gd\" (UID: \"9662276f-9936-4ed0-a464-c509bbaaa7a0\") " pod="openshift-console/downloads-747b44746d-ss8gd" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992343 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-bound-sa-token\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992366 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992391 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b07e83a4-1ea1-490a-8c95-49627f697ee0-config\") pod \"authentication-operator-7f5c659b84-v7lf8\" (UID: \"b07e83a4-1ea1-490a-8c95-49627f697ee0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992413 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e6e237e2-c84f-498f-888a-4fdaa7af3eb8-tmpfs\") pod \"packageserver-7d4fc7d867-lw4rt\" (UID: \"e6e237e2-c84f-498f-888a-4fdaa7af3eb8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992433 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/88acb60d-ae97-490e-bab2-b78f03e1b8c8-etcd-service-ca\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992443 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e63e177b-5ff9-4662-be8b-4b193c72fc72-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-smhtw\" (UID: \"e63e177b-5ff9-4662-be8b-4b193c72fc72\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992504 5106 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6ftcp\" (UniqueName: \"kubernetes.io/projected/d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c-kube-api-access-6ftcp\") pod \"olm-operator-5cdf44d969-8l78l\" (UID: \"d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992549 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/af8b1c72-0d76-40cc-9135-92bdefd2a461-audit-dir\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992666 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cb851d4f-eefc-463d-bf17-c8ac126bd7c1-images\") pod \"machine-config-operator-67c9d58cbb-hzg88\" (UID: \"cb851d4f-eefc-463d-bf17-c8ac126bd7c1\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.991318 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cb851d4f-eefc-463d-bf17-c8ac126bd7c1-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-hzg88\" (UID: \"cb851d4f-eefc-463d-bf17-c8ac126bd7c1\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.991371 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf878343-818d-4ca7-a3ce-507df55ae4c5-config\") pod \"openshift-kube-scheduler-operator-54f497555d-f42c8\" (UID: \"cf878343-818d-4ca7-a3ce-507df55ae4c5\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992956 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba3de43-9844-4e15-b900-5a48bac6f058-config\") pod \"console-operator-67c89758df-qxnjl\" (UID: \"aba3de43-9844-4e15-b900-5a48bac6f058\") " pod="openshift-console-operator/console-operator-67c89758df-qxnjl" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.992989 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/61dc866d-abfc-4dea-a349-6635b614e189-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-vsgrz\" (UID: \"61dc866d-abfc-4dea-a349-6635b614e189\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.993029 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20d42914-6ce8-4457-aa77-e01ef4fb9895-trusted-ca-bundle\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.993053 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c-profile-collector-cert\") pod \"olm-operator-5cdf44d969-8l78l\" (UID: \"d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.993080 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" 
(UniqueName: \"kubernetes.io/configmap/0d49bd21-508b-4161-9bef-e0bad55ee83b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hdf7z\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.993103 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aba3de43-9844-4e15-b900-5a48bac6f058-trusted-ca\") pod \"console-operator-67c89758df-qxnjl\" (UID: \"aba3de43-9844-4e15-b900-5a48bac6f058\") " pod="openshift-console-operator/console-operator-67c89758df-qxnjl" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.993127 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l4kmk\" (UniqueName: \"kubernetes.io/projected/7724e258-4050-4d7e-83c9-40b6dec81d33-kube-api-access-l4kmk\") pod \"multus-admission-controller-69db94689b-5lqg7\" (UID: \"7724e258-4050-4d7e-83c9-40b6dec81d33\") " pod="openshift-multus/multus-admission-controller-69db94689b-5lqg7" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.993161 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wd6nq\" (UniqueName: \"kubernetes.io/projected/c2b53a90-8ad7-40b2-b35a-2f35af352e6b-kube-api-access-wd6nq\") pod \"machine-config-controller-f9cdd68f7-jxtsx\" (UID: \"c2b53a90-8ad7-40b2-b35a-2f35af352e6b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.993188 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-44gxl\" (UniqueName: \"kubernetes.io/projected/e63e177b-5ff9-4662-be8b-4b193c72fc72-kube-api-access-44gxl\") pod \"ingress-operator-6b9cb4dbcf-smhtw\" (UID: \"e63e177b-5ff9-4662-be8b-4b193c72fc72\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" Mar 20 00:11:21 crc 
kubenswrapper[5106]: I0320 00:11:21.993215 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/992000e3-50f4-48fa-8a55-58bfade85d0c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-w2brd\" (UID: \"992000e3-50f4-48fa-8a55-58bfade85d0c\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-w2brd" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.993242 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-plugins-dir\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.993278 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.994330 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/59096bb7-5757-4196-96a5-f14e967998e7-tmp\") pod \"marketplace-operator-547dbd544d-xfn66\" (UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.994978 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59096bb7-5757-4196-96a5-f14e967998e7-marketplace-trusted-ca\") pod 
\"marketplace-operator-547dbd544d-xfn66\" (UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.995036 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cb851d4f-eefc-463d-bf17-c8ac126bd7c1-images\") pod \"machine-config-operator-67c9d58cbb-hzg88\" (UID: \"cb851d4f-eefc-463d-bf17-c8ac126bd7c1\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.995225 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cf878343-818d-4ca7-a3ce-507df55ae4c5-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-f42c8\" (UID: \"cf878343-818d-4ca7-a3ce-507df55ae4c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.995560 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cb851d4f-eefc-463d-bf17-c8ac126bd7c1-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-hzg88\" (UID: \"cb851d4f-eefc-463d-bf17-c8ac126bd7c1\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.995943 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7aa4c8fd-ca13-4ef3-b774-d55fd525fe13-tmp-dir\") pod \"kube-apiserver-operator-575994946d-ncds5\" (UID: \"7aa4c8fd-ca13-4ef3-b774-d55fd525fe13\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.996153 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.996432 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aba3de43-9844-4e15-b900-5a48bac6f058-serving-cert\") pod \"console-operator-67c89758df-qxnjl\" (UID: \"aba3de43-9844-4e15-b900-5a48bac6f058\") " pod="openshift-console-operator/console-operator-67c89758df-qxnjl" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.996587 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0575414-b1d1-44db-b352-6f101cce8c8f-config\") pod \"kube-storage-version-migrator-operator-565b79b866-q54jx\" (UID: \"c0575414-b1d1-44db-b352-6f101cce8c8f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.997187 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e63e177b-5ff9-4662-be8b-4b193c72fc72-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-smhtw\" (UID: \"e63e177b-5ff9-4662-be8b-4b193c72fc72\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.997661 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a134eee6-8b26-4a27-8fbe-6fbc51787dc4-service-ca-bundle\") pod \"router-default-68cf44c8b8-vzb7m\" (UID: \"a134eee6-8b26-4a27-8fbe-6fbc51787dc4\") " pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.997791 5106 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c-srv-cert\") pod \"olm-operator-5cdf44d969-8l78l\" (UID: \"d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998016 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998029 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998071 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2b53a90-8ad7-40b2-b35a-2f35af352e6b-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-jxtsx\" (UID: \"c2b53a90-8ad7-40b2-b35a-2f35af352e6b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998098 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lrdmn\" (UniqueName: \"kubernetes.io/projected/59096bb7-5757-4196-96a5-f14e967998e7-kube-api-access-lrdmn\") pod \"marketplace-operator-547dbd544d-xfn66\" 
(UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998430 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: E0320 00:11:21.998527 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:22.498436717 +0000 UTC m=+136.932170771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998628 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998596 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/20d42914-6ce8-4457-aa77-e01ef4fb9895-trusted-ca-bundle\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998673 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998725 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/88acb60d-ae97-490e-bab2-b78f03e1b8c8-etcd-ca\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998769 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-qgpkp\" (UID: \"8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998801 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5lkc\" (UniqueName: \"kubernetes.io/projected/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-kube-api-access-h5lkc\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998811 5106 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88acb60d-ae97-490e-bab2-b78f03e1b8c8-config\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998862 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6lzx8\" (UniqueName: \"kubernetes.io/projected/88da1299-0802-4745-8701-7de465542299-kube-api-access-6lzx8\") pod \"collect-profiles-29566080-tg7xz\" (UID: \"88da1299-0802-4745-8701-7de465542299\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998891 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c2b53a90-8ad7-40b2-b35a-2f35af352e6b-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-jxtsx\" (UID: \"c2b53a90-8ad7-40b2-b35a-2f35af352e6b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998917 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.999017 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ml5xh\" (UniqueName: \"kubernetes.io/projected/af8b1c72-0d76-40cc-9135-92bdefd2a461-kube-api-access-ml5xh\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: 
\"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.999156 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r2lnq\" (UniqueName: \"kubernetes.io/projected/c0575414-b1d1-44db-b352-6f101cce8c8f-kube-api-access-r2lnq\") pod \"kube-storage-version-migrator-operator-565b79b866-q54jx\" (UID: \"c0575414-b1d1-44db-b352-6f101cce8c8f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.998021 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/00c02264-3068-4287-a30a-13b0003bf5e1-installation-pull-secrets\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.999294 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/88acb60d-ae97-490e-bab2-b78f03e1b8c8-etcd-ca\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.999298 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94b3abc0-a538-458d-9975-c9b1a373ee95-config\") pod \"service-ca-operator-5b9c976747-cnh84\" (UID: \"94b3abc0-a538-458d-9975-c9b1a373ee95\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.999398 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/20d42914-6ce8-4457-aa77-e01ef4fb9895-audit-policies\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.999447 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7n2dj\" (UniqueName: \"kubernetes.io/projected/15d5d3b4-91df-49a0-9032-ebd865eacb5a-kube-api-access-7n2dj\") pod \"migrator-866fcbc849-h54ck\" (UID: \"15d5d3b4-91df-49a0-9032-ebd865eacb5a\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h54ck" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.999567 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/00c02264-3068-4287-a30a-13b0003bf5e1-ca-trust-extracted\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.999668 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aa4c8fd-ca13-4ef3-b774-d55fd525fe13-serving-cert\") pod \"kube-apiserver-operator-575994946d-ncds5\" (UID: \"7aa4c8fd-ca13-4ef3-b774-d55fd525fe13\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.999719 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aa4c8fd-ca13-4ef3-b774-d55fd525fe13-config\") pod \"kube-apiserver-operator-575994946d-ncds5\" (UID: \"7aa4c8fd-ca13-4ef3-b774-d55fd525fe13\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.999746 5106 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rlzqr\" (UniqueName: \"kubernetes.io/projected/aba3de43-9844-4e15-b900-5a48bac6f058-kube-api-access-rlzqr\") pod \"console-operator-67c89758df-qxnjl\" (UID: \"aba3de43-9844-4e15-b900-5a48bac6f058\") " pod="openshift-console-operator/console-operator-67c89758df-qxnjl" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.999784 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20d42914-6ce8-4457-aa77-e01ef4fb9895-etcd-serving-ca\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:21 crc kubenswrapper[5106]: I0320 00:11:21.999936 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20d42914-6ce8-4457-aa77-e01ef4fb9895-audit-policies\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:21.999982 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94b3abc0-a538-458d-9975-c9b1a373ee95-config\") pod \"service-ca-operator-5b9c976747-cnh84\" (UID: \"94b3abc0-a538-458d-9975-c9b1a373ee95\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000036 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/00c02264-3068-4287-a30a-13b0003bf5e1-ca-trust-extracted\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:22 crc 
kubenswrapper[5106]: I0320 00:11:22.000105 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94b3abc0-a538-458d-9975-c9b1a373ee95-serving-cert\") pod \"service-ca-operator-5b9c976747-cnh84\" (UID: \"94b3abc0-a538-458d-9975-c9b1a373ee95\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000140 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbpqw\" (UniqueName: \"kubernetes.io/projected/b07e83a4-1ea1-490a-8c95-49627f697ee0-kube-api-access-zbpqw\") pod \"authentication-operator-7f5c659b84-v7lf8\" (UID: \"b07e83a4-1ea1-490a-8c95-49627f697ee0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000166 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a134eee6-8b26-4a27-8fbe-6fbc51787dc4-metrics-certs\") pod \"router-default-68cf44c8b8-vzb7m\" (UID: \"a134eee6-8b26-4a27-8fbe-6fbc51787dc4\") " pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000200 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27db15f2-d153-4ecb-beb5-b139549dcb36-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-sprmn\" (UID: \"27db15f2-d153-4ecb-beb5-b139549dcb36\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sprmn" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000226 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/00c02264-3068-4287-a30a-13b0003bf5e1-registry-certificates\") pod 
\"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000252 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88da1299-0802-4745-8701-7de465542299-config-volume\") pod \"collect-profiles-29566080-tg7xz\" (UID: \"88da1299-0802-4745-8701-7de465542299\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000274 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9613d763-dd08-4d6e-8cf3-ef60b7ef3211-signing-cabundle\") pod \"service-ca-74545575db-2xds7\" (UID: \"9613d763-dd08-4d6e-8cf3-ef60b7ef3211\") " pod="openshift-service-ca/service-ca-74545575db-2xds7" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000298 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-mountpoint-dir\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000328 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e63e177b-5ff9-4662-be8b-4b193c72fc72-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-smhtw\" (UID: \"e63e177b-5ff9-4662-be8b-4b193c72fc72\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000355 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4081cd08-5e12-4cca-bfd2-666bb6d87464-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-n7m7v\" (UID: \"4081cd08-5e12-4cca-bfd2-666bb6d87464\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000377 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9613d763-dd08-4d6e-8cf3-ef60b7ef3211-signing-key\") pod \"service-ca-74545575db-2xds7\" (UID: \"9613d763-dd08-4d6e-8cf3-ef60b7ef3211\") " pod="openshift-service-ca/service-ca-74545575db-2xds7" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000413 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xczxt\" (UniqueName: \"kubernetes.io/projected/94b3abc0-a538-458d-9975-c9b1a373ee95-kube-api-access-xczxt\") pod \"service-ca-operator-5b9c976747-cnh84\" (UID: \"94b3abc0-a538-458d-9975-c9b1a373ee95\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000441 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000495 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a134eee6-8b26-4a27-8fbe-6fbc51787dc4-default-certificate\") pod \"router-default-68cf44c8b8-vzb7m\" (UID: \"a134eee6-8b26-4a27-8fbe-6fbc51787dc4\") " pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:22 
crc kubenswrapper[5106]: I0320 00:11:22.000511 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7aa4c8fd-ca13-4ef3-b774-d55fd525fe13-config\") pod \"kube-apiserver-operator-575994946d-ncds5\" (UID: \"7aa4c8fd-ca13-4ef3-b774-d55fd525fe13\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000517 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf878343-818d-4ca7-a3ce-507df55ae4c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-f42c8\" (UID: \"cf878343-818d-4ca7-a3ce-507df55ae4c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000597 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7vf4\" (UniqueName: \"kubernetes.io/projected/0d49bd21-508b-4161-9bef-e0bad55ee83b-kube-api-access-q7vf4\") pod \"cni-sysctl-allowlist-ds-hdf7z\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000634 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000660 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7aa4c8fd-ca13-4ef3-b774-d55fd525fe13-kube-api-access\") pod 
\"kube-apiserver-operator-575994946d-ncds5\" (UID: \"7aa4c8fd-ca13-4ef3-b774-d55fd525fe13\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000687 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-registration-dir\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000740 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20d42914-6ce8-4457-aa77-e01ef4fb9895-etcd-client\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000764 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20d42914-6ce8-4457-aa77-e01ef4fb9895-encryption-config\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000789 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7724e258-4050-4d7e-83c9-40b6dec81d33-webhook-certs\") pod \"multus-admission-controller-69db94689b-5lqg7\" (UID: \"7724e258-4050-4d7e-83c9-40b6dec81d33\") " pod="openshift-multus/multus-admission-controller-69db94689b-5lqg7" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000822 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000850 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nwtgx\" (UniqueName: \"kubernetes.io/projected/27db15f2-d153-4ecb-beb5-b139549dcb36-kube-api-access-nwtgx\") pod \"cluster-samples-operator-6b564684c8-sprmn\" (UID: \"27db15f2-d153-4ecb-beb5-b139549dcb36\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sprmn" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000875 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/88acb60d-ae97-490e-bab2-b78f03e1b8c8-tmp-dir\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000882 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/992000e3-50f4-48fa-8a55-58bfade85d0c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-w2brd\" (UID: \"992000e3-50f4-48fa-8a55-58bfade85d0c\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-w2brd" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.000903 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dfcj2\" (UniqueName: \"kubernetes.io/projected/61dc866d-abfc-4dea-a349-6635b614e189-kube-api-access-dfcj2\") pod \"package-server-manager-77f986bd66-vsgrz\" (UID: \"61dc866d-abfc-4dea-a349-6635b614e189\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.001325 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lmh2\" (UniqueName: \"kubernetes.io/projected/2de01ede-f866-4638-9351-ab1ef6392aba-kube-api-access-4lmh2\") pod \"dns-default-dg59t\" (UID: \"2de01ede-f866-4638-9351-ab1ef6392aba\") " pod="openshift-dns/dns-default-dg59t" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.001372 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqkxz\" (UniqueName: \"kubernetes.io/projected/e6e237e2-c84f-498f-888a-4fdaa7af3eb8-kube-api-access-hqkxz\") pod \"packageserver-7d4fc7d867-lw4rt\" (UID: \"e6e237e2-c84f-498f-888a-4fdaa7af3eb8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.002257 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.002906 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88da1299-0802-4745-8701-7de465542299-config-volume\") pod \"collect-profiles-29566080-tg7xz\" (UID: \"88da1299-0802-4745-8701-7de465542299\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.003698 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/20d42914-6ce8-4457-aa77-e01ef4fb9895-etcd-serving-ca\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.004078 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aba3de43-9844-4e15-b900-5a48bac6f058-config\") pod \"console-operator-67c89758df-qxnjl\" (UID: \"aba3de43-9844-4e15-b900-5a48bac6f058\") " pod="openshift-console-operator/console-operator-67c89758df-qxnjl" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.004397 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/61dc866d-abfc-4dea-a349-6635b614e189-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-vsgrz\" (UID: \"61dc866d-abfc-4dea-a349-6635b614e189\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.004422 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/59096bb7-5757-4196-96a5-f14e967998e7-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-xfn66\" (UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.004639 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a134eee6-8b26-4a27-8fbe-6fbc51787dc4-stats-auth\") pod \"router-default-68cf44c8b8-vzb7m\" (UID: \"a134eee6-8b26-4a27-8fbe-6fbc51787dc4\") " pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.004715 5106 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-qgpkp\" (UID: \"8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.005047 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f87mz\" (UniqueName: \"kubernetes.io/projected/cb851d4f-eefc-463d-bf17-c8ac126bd7c1-kube-api-access-f87mz\") pod \"machine-config-operator-67c9d58cbb-hzg88\" (UID: \"cb851d4f-eefc-463d-bf17-c8ac126bd7c1\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.005774 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/88acb60d-ae97-490e-bab2-b78f03e1b8c8-etcd-client\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.006262 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-audit-policies\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.006333 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4zmp\" (UniqueName: \"kubernetes.io/projected/a134eee6-8b26-4a27-8fbe-6fbc51787dc4-kube-api-access-w4zmp\") pod \"router-default-68cf44c8b8-vzb7m\" (UID: \"a134eee6-8b26-4a27-8fbe-6fbc51787dc4\") " 
pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.006635 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7724e258-4050-4d7e-83c9-40b6dec81d33-webhook-certs\") pod \"multus-admission-controller-69db94689b-5lqg7\" (UID: \"7724e258-4050-4d7e-83c9-40b6dec81d33\") " pod="openshift-multus/multus-admission-controller-69db94689b-5lqg7" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.006740 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20d42914-6ce8-4457-aa77-e01ef4fb9895-serving-cert\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.006801 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf878343-818d-4ca7-a3ce-507df55ae4c5-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-f42c8\" (UID: \"cf878343-818d-4ca7-a3ce-507df55ae4c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.006829 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/969ecef5-bb59-4625-b72a-90db5ebb851c-cert\") pod \"ingress-canary-q5tjt\" (UID: \"969ecef5-bb59-4625-b72a-90db5ebb851c\") " pod="openshift-ingress-canary/ingress-canary-q5tjt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.008032 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-audit-policies\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: 
\"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.008084 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/88acb60d-ae97-490e-bab2-b78f03e1b8c8-tmp-dir\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.008214 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3da82db9-f242-4af1-83ef-d68599ce6c8d-metrics-tls\") pod \"dns-operator-799b87ffcd-72hsj\" (UID: \"3da82db9-f242-4af1-83ef-d68599ce6c8d\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-72hsj" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.008243 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c-tmpfs\") pod \"olm-operator-5cdf44d969-8l78l\" (UID: \"d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.008260 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4-srv-cert\") pod \"catalog-operator-75ff9f647d-qgpkp\" (UID: \"8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.008317 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20d42914-6ce8-4457-aa77-e01ef4fb9895-audit-dir\") pod \"apiserver-8596bd845d-8zd6p\" (UID: 
\"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.008338 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2de01ede-f866-4638-9351-ab1ef6392aba-metrics-tls\") pod \"dns-default-dg59t\" (UID: \"2de01ede-f866-4638-9351-ab1ef6392aba\") " pod="openshift-dns/dns-default-dg59t" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.008375 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.008406 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4081cd08-5e12-4cca-bfd2-666bb6d87464-config\") pod \"kube-controller-manager-operator-69d5f845f8-n7m7v\" (UID: \"4081cd08-5e12-4cca-bfd2-666bb6d87464\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.008425 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88acb60d-ae97-490e-bab2-b78f03e1b8c8-serving-cert\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.008761 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c-tmpfs\") pod \"olm-operator-5cdf44d969-8l78l\" (UID: \"d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.009055 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/00c02264-3068-4287-a30a-13b0003bf5e1-registry-certificates\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.009132 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20d42914-6ce8-4457-aa77-e01ef4fb9895-audit-dir\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.009169 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a134eee6-8b26-4a27-8fbe-6fbc51787dc4-metrics-certs\") pod \"router-default-68cf44c8b8-vzb7m\" (UID: \"a134eee6-8b26-4a27-8fbe-6fbc51787dc4\") " pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.009189 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4081cd08-5e12-4cca-bfd2-666bb6d87464-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-n7m7v\" (UID: \"4081cd08-5e12-4cca-bfd2-666bb6d87464\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.009403 5106 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-blwnm\" (UniqueName: \"kubernetes.io/projected/88acb60d-ae97-490e-bab2-b78f03e1b8c8-kube-api-access-blwnm\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.009453 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0575414-b1d1-44db-b352-6f101cce8c8f-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-q54jx\" (UID: \"c0575414-b1d1-44db-b352-6f101cce8c8f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.009542 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4081cd08-5e12-4cca-bfd2-666bb6d87464-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-n7m7v\" (UID: \"4081cd08-5e12-4cca-bfd2-666bb6d87464\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.009729 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.010511 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aba3de43-9844-4e15-b900-5a48bac6f058-trusted-ca\") pod \"console-operator-67c89758df-qxnjl\" (UID: \"aba3de43-9844-4e15-b900-5a48bac6f058\") " 
pod="openshift-console-operator/console-operator-67c89758df-qxnjl" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.010720 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b07e83a4-1ea1-490a-8c95-49627f697ee0-serving-cert\") pod \"authentication-operator-7f5c659b84-v7lf8\" (UID: \"b07e83a4-1ea1-490a-8c95-49627f697ee0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.010811 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.011029 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4081cd08-5e12-4cca-bfd2-666bb6d87464-config\") pod \"kube-controller-manager-operator-69d5f845f8-n7m7v\" (UID: \"4081cd08-5e12-4cca-bfd2-666bb6d87464\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.015895 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88acb60d-ae97-490e-bab2-b78f03e1b8c8-serving-cert\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.016100 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/27db15f2-d153-4ecb-beb5-b139549dcb36-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-sprmn\" (UID: \"27db15f2-d153-4ecb-beb5-b139549dcb36\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sprmn" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.016195 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88da1299-0802-4745-8701-7de465542299-secret-volume\") pod \"collect-profiles-29566080-tg7xz\" (UID: \"88da1299-0802-4745-8701-7de465542299\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.016229 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20d42914-6ce8-4457-aa77-e01ef4fb9895-etcd-client\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.016361 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.016471 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20d42914-6ce8-4457-aa77-e01ef4fb9895-encryption-config\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.016602 5106 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.017176 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf878343-818d-4ca7-a3ce-507df55ae4c5-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-f42c8\" (UID: \"cf878343-818d-4ca7-a3ce-507df55ae4c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.017393 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94b3abc0-a538-458d-9975-c9b1a373ee95-serving-cert\") pod \"service-ca-operator-5b9c976747-cnh84\" (UID: \"94b3abc0-a538-458d-9975-c9b1a373ee95\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.017571 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7aa4c8fd-ca13-4ef3-b774-d55fd525fe13-serving-cert\") pod \"kube-apiserver-operator-575994946d-ncds5\" (UID: \"7aa4c8fd-ca13-4ef3-b774-d55fd525fe13\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.017655 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a134eee6-8b26-4a27-8fbe-6fbc51787dc4-default-certificate\") pod \"router-default-68cf44c8b8-vzb7m\" (UID: \"a134eee6-8b26-4a27-8fbe-6fbc51787dc4\") " 
pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.017935 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3da82db9-f242-4af1-83ef-d68599ce6c8d-metrics-tls\") pod \"dns-operator-799b87ffcd-72hsj\" (UID: \"3da82db9-f242-4af1-83ef-d68599ce6c8d\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-72hsj" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.017969 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.018488 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-registry-tls\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.018859 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0575414-b1d1-44db-b352-6f101cce8c8f-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-q54jx\" (UID: \"c0575414-b1d1-44db-b352-6f101cce8c8f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.018993 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c-profile-collector-cert\") pod \"olm-operator-5cdf44d969-8l78l\" (UID: \"d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.019878 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a134eee6-8b26-4a27-8fbe-6fbc51787dc4-stats-auth\") pod \"router-default-68cf44c8b8-vzb7m\" (UID: \"a134eee6-8b26-4a27-8fbe-6fbc51787dc4\") " pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.020226 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4-srv-cert\") pod \"catalog-operator-75ff9f647d-qgpkp\" (UID: \"8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.023174 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e63e177b-5ff9-4662-be8b-4b193c72fc72-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-smhtw\" (UID: \"e63e177b-5ff9-4662-be8b-4b193c72fc72\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.023262 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2b53a90-8ad7-40b2-b35a-2f35af352e6b-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-jxtsx\" (UID: \"c2b53a90-8ad7-40b2-b35a-2f35af352e6b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.023314 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/20d42914-6ce8-4457-aa77-e01ef4fb9895-serving-cert\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.023647 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4081cd08-5e12-4cca-bfd2-666bb6d87464-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-n7m7v\" (UID: \"4081cd08-5e12-4cca-bfd2-666bb6d87464\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.030622 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.031822 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5zsf\" (UniqueName: \"kubernetes.io/projected/20d42914-6ce8-4457-aa77-e01ef4fb9895-kube-api-access-k5zsf\") pod \"apiserver-8596bd845d-8zd6p\" (UID: \"20d42914-6ce8-4457-aa77-e01ef4fb9895\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.055304 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bx85\" (UniqueName: \"kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-kube-api-access-5bx85\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:22 crc kubenswrapper[5106]: 
I0320 00:11:22.071515 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4081cd08-5e12-4cca-bfd2-666bb6d87464-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-n7m7v\" (UID: \"4081cd08-5e12-4cca-bfd2-666bb6d87464\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.090975 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bs5p\" (UniqueName: \"kubernetes.io/projected/992000e3-50f4-48fa-8a55-58bfade85d0c-kube-api-access-7bs5p\") pod \"control-plane-machine-set-operator-75ffdb6fcd-w2brd\" (UID: \"992000e3-50f4-48fa-8a55-58bfade85d0c\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-w2brd" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.111951 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:22 crc kubenswrapper[5106]: E0320 00:11:22.112471 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:22.612431596 +0000 UTC m=+137.046165650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.112597 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2de01ede-f866-4638-9351-ab1ef6392aba-metrics-tls\") pod \"dns-default-dg59t\" (UID: \"2de01ede-f866-4638-9351-ab1ef6392aba\") " pod="openshift-dns/dns-default-dg59t" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.112655 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b07e83a4-1ea1-490a-8c95-49627f697ee0-serving-cert\") pod \"authentication-operator-7f5c659b84-v7lf8\" (UID: \"b07e83a4-1ea1-490a-8c95-49627f697ee0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.112686 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-socket-dir\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.112746 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0d49bd21-508b-4161-9bef-e0bad55ee83b-ready\") pod \"cni-sysctl-allowlist-ds-hdf7z\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.112765 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-csi-data-dir\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.112795 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0d49bd21-508b-4161-9bef-e0bad55ee83b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hdf7z\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.112820 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6e237e2-c84f-498f-888a-4fdaa7af3eb8-apiservice-cert\") pod \"packageserver-7d4fc7d867-lw4rt\" (UID: \"e6e237e2-c84f-498f-888a-4fdaa7af3eb8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.112851 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2de01ede-f866-4638-9351-ab1ef6392aba-config-volume\") pod \"dns-default-dg59t\" (UID: \"2de01ede-f866-4638-9351-ab1ef6392aba\") " pod="openshift-dns/dns-default-dg59t" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.112906 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r925b\" (UniqueName: \"kubernetes.io/projected/9613d763-dd08-4d6e-8cf3-ef60b7ef3211-kube-api-access-r925b\") pod \"service-ca-74545575db-2xds7\" (UID: \"9613d763-dd08-4d6e-8cf3-ef60b7ef3211\") " 
pod="openshift-service-ca/service-ca-74545575db-2xds7" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.112932 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-76pwp\" (UniqueName: \"kubernetes.io/projected/969ecef5-bb59-4625-b72a-90db5ebb851c-kube-api-access-76pwp\") pod \"ingress-canary-q5tjt\" (UID: \"969ecef5-bb59-4625-b72a-90db5ebb851c\") " pod="openshift-ingress-canary/ingress-canary-q5tjt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.112974 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6e237e2-c84f-498f-888a-4fdaa7af3eb8-webhook-cert\") pod \"packageserver-7d4fc7d867-lw4rt\" (UID: \"e6e237e2-c84f-498f-888a-4fdaa7af3eb8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.113003 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0f667eb1-070a-46f5-acb6-3532ff089720-node-bootstrap-token\") pod \"machine-config-server-8jsvn\" (UID: \"0f667eb1-070a-46f5-acb6-3532ff089720\") " pod="openshift-machine-config-operator/machine-config-server-8jsvn" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.113028 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0f667eb1-070a-46f5-acb6-3532ff089720-certs\") pod \"machine-config-server-8jsvn\" (UID: \"0f667eb1-070a-46f5-acb6-3532ff089720\") " pod="openshift-machine-config-operator/machine-config-server-8jsvn" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.113098 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8qbjj\" (UniqueName: \"kubernetes.io/projected/0f667eb1-070a-46f5-acb6-3532ff089720-kube-api-access-8qbjj\") pod \"machine-config-server-8jsvn\" (UID: 
\"0f667eb1-070a-46f5-acb6-3532ff089720\") " pod="openshift-machine-config-operator/machine-config-server-8jsvn" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.113145 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2de01ede-f866-4638-9351-ab1ef6392aba-tmp-dir\") pod \"dns-default-dg59t\" (UID: \"2de01ede-f866-4638-9351-ab1ef6392aba\") " pod="openshift-dns/dns-default-dg59t" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.113166 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b07e83a4-1ea1-490a-8c95-49627f697ee0-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-v7lf8\" (UID: \"b07e83a4-1ea1-490a-8c95-49627f697ee0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.113189 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b07e83a4-1ea1-490a-8c95-49627f697ee0-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-v7lf8\" (UID: \"b07e83a4-1ea1-490a-8c95-49627f697ee0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.113230 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vhhm8\" (UniqueName: \"kubernetes.io/projected/9662276f-9936-4ed0-a464-c509bbaaa7a0-kube-api-access-vhhm8\") pod \"downloads-747b44746d-ss8gd\" (UID: \"9662276f-9936-4ed0-a464-c509bbaaa7a0\") " pod="openshift-console/downloads-747b44746d-ss8gd" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.113259 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b07e83a4-1ea1-490a-8c95-49627f697ee0-config\") pod \"authentication-operator-7f5c659b84-v7lf8\" (UID: \"b07e83a4-1ea1-490a-8c95-49627f697ee0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.113282 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e6e237e2-c84f-498f-888a-4fdaa7af3eb8-tmpfs\") pod \"packageserver-7d4fc7d867-lw4rt\" (UID: \"e6e237e2-c84f-498f-888a-4fdaa7af3eb8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.113328 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0d49bd21-508b-4161-9bef-e0bad55ee83b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hdf7z\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.113878 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2de01ede-f866-4638-9351-ab1ef6392aba-tmp-dir\") pod \"dns-default-dg59t\" (UID: \"2de01ede-f866-4638-9351-ab1ef6392aba\") " pod="openshift-dns/dns-default-dg59t" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.114028 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0d49bd21-508b-4161-9bef-e0bad55ee83b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-hdf7z\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.114276 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-socket-dir\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.114367 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0d49bd21-508b-4161-9bef-e0bad55ee83b-ready\") pod \"cni-sysctl-allowlist-ds-hdf7z\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.114472 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-csi-data-dir\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.114527 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0d49bd21-508b-4161-9bef-e0bad55ee83b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-hdf7z\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.114543 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-bound-sa-token\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.114843 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-plugins-dir\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.115096 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.115134 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h5lkc\" (UniqueName: \"kubernetes.io/projected/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-kube-api-access-h5lkc\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.115169 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-plugins-dir\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.115173 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zbpqw\" (UniqueName: \"kubernetes.io/projected/b07e83a4-1ea1-490a-8c95-49627f697ee0-kube-api-access-zbpqw\") pod \"authentication-operator-7f5c659b84-v7lf8\" (UID: \"b07e83a4-1ea1-490a-8c95-49627f697ee0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.115334 5106 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9613d763-dd08-4d6e-8cf3-ef60b7ef3211-signing-cabundle\") pod \"service-ca-74545575db-2xds7\" (UID: \"9613d763-dd08-4d6e-8cf3-ef60b7ef3211\") " pod="openshift-service-ca/service-ca-74545575db-2xds7" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.115374 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-mountpoint-dir\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.115416 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9613d763-dd08-4d6e-8cf3-ef60b7ef3211-signing-key\") pod \"service-ca-74545575db-2xds7\" (UID: \"9613d763-dd08-4d6e-8cf3-ef60b7ef3211\") " pod="openshift-service-ca/service-ca-74545575db-2xds7" Mar 20 00:11:22 crc kubenswrapper[5106]: E0320 00:11:22.115444 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:22.615430283 +0000 UTC m=+137.049164337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.115519 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b07e83a4-1ea1-490a-8c95-49627f697ee0-config\") pod \"authentication-operator-7f5c659b84-v7lf8\" (UID: \"b07e83a4-1ea1-490a-8c95-49627f697ee0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.116364 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e6e237e2-c84f-498f-888a-4fdaa7af3eb8-tmpfs\") pod \"packageserver-7d4fc7d867-lw4rt\" (UID: \"e6e237e2-c84f-498f-888a-4fdaa7af3eb8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.116520 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9613d763-dd08-4d6e-8cf3-ef60b7ef3211-signing-cabundle\") pod \"service-ca-74545575db-2xds7\" (UID: \"9613d763-dd08-4d6e-8cf3-ef60b7ef3211\") " pod="openshift-service-ca/service-ca-74545575db-2xds7" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.116750 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b07e83a4-1ea1-490a-8c95-49627f697ee0-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-v7lf8\" (UID: 
\"b07e83a4-1ea1-490a-8c95-49627f697ee0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.117185 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-mountpoint-dir\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.117233 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q7vf4\" (UniqueName: \"kubernetes.io/projected/0d49bd21-508b-4161-9bef-e0bad55ee83b-kube-api-access-q7vf4\") pod \"cni-sysctl-allowlist-ds-hdf7z\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.117260 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-registration-dir\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.117366 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4lmh2\" (UniqueName: \"kubernetes.io/projected/2de01ede-f866-4638-9351-ab1ef6392aba-kube-api-access-4lmh2\") pod \"dns-default-dg59t\" (UID: \"2de01ede-f866-4638-9351-ab1ef6392aba\") " pod="openshift-dns/dns-default-dg59t" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.117455 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-registration-dir\") pod \"csi-hostpathplugin-8fdp6\" (UID: 
\"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.118036 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hqkxz\" (UniqueName: \"kubernetes.io/projected/e6e237e2-c84f-498f-888a-4fdaa7af3eb8-kube-api-access-hqkxz\") pod \"packageserver-7d4fc7d867-lw4rt\" (UID: \"e6e237e2-c84f-498f-888a-4fdaa7af3eb8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.118070 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e6e237e2-c84f-498f-888a-4fdaa7af3eb8-apiservice-cert\") pod \"packageserver-7d4fc7d867-lw4rt\" (UID: \"e6e237e2-c84f-498f-888a-4fdaa7af3eb8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.118160 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/969ecef5-bb59-4625-b72a-90db5ebb851c-cert\") pod \"ingress-canary-q5tjt\" (UID: \"969ecef5-bb59-4625-b72a-90db5ebb851c\") " pod="openshift-ingress-canary/ingress-canary-q5tjt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.118533 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2de01ede-f866-4638-9351-ab1ef6392aba-metrics-tls\") pod \"dns-default-dg59t\" (UID: \"2de01ede-f866-4638-9351-ab1ef6392aba\") " pod="openshift-dns/dns-default-dg59t" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.119073 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0f667eb1-070a-46f5-acb6-3532ff089720-certs\") pod \"machine-config-server-8jsvn\" (UID: \"0f667eb1-070a-46f5-acb6-3532ff089720\") " 
pod="openshift-machine-config-operator/machine-config-server-8jsvn" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.120477 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b07e83a4-1ea1-490a-8c95-49627f697ee0-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-v7lf8\" (UID: \"b07e83a4-1ea1-490a-8c95-49627f697ee0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.120521 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e6e237e2-c84f-498f-888a-4fdaa7af3eb8-webhook-cert\") pod \"packageserver-7d4fc7d867-lw4rt\" (UID: \"e6e237e2-c84f-498f-888a-4fdaa7af3eb8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.120604 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2de01ede-f866-4638-9351-ab1ef6392aba-config-volume\") pod \"dns-default-dg59t\" (UID: \"2de01ede-f866-4638-9351-ab1ef6392aba\") " pod="openshift-dns/dns-default-dg59t" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.121194 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0f667eb1-070a-46f5-acb6-3532ff089720-node-bootstrap-token\") pod \"machine-config-server-8jsvn\" (UID: \"0f667eb1-070a-46f5-acb6-3532ff089720\") " pod="openshift-machine-config-operator/machine-config-server-8jsvn" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.123156 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b07e83a4-1ea1-490a-8c95-49627f697ee0-serving-cert\") pod \"authentication-operator-7f5c659b84-v7lf8\" (UID: 
\"b07e83a4-1ea1-490a-8c95-49627f697ee0\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.129823 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9613d763-dd08-4d6e-8cf3-ef60b7ef3211-signing-key\") pod \"service-ca-74545575db-2xds7\" (UID: \"9613d763-dd08-4d6e-8cf3-ef60b7ef3211\") " pod="openshift-service-ca/service-ca-74545575db-2xds7" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.132843 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/969ecef5-bb59-4625-b72a-90db5ebb851c-cert\") pod \"ingress-canary-q5tjt\" (UID: \"969ecef5-bb59-4625-b72a-90db5ebb851c\") " pod="openshift-ingress-canary/ingress-canary-q5tjt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.137313 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.137491 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m8rh\" (UniqueName: \"kubernetes.io/projected/3da82db9-f242-4af1-83ef-d68599ce6c8d-kube-api-access-5m8rh\") pod \"dns-operator-799b87ffcd-72hsj\" (UID: \"3da82db9-f242-4af1-83ef-d68599ce6c8d\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-72hsj" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.150201 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndxqd\" (UniqueName: \"kubernetes.io/projected/8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4-kube-api-access-ndxqd\") pod \"catalog-operator-75ff9f647d-qgpkp\" (UID: \"8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.169065 5106 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e63e177b-5ff9-4662-be8b-4b193c72fc72-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-smhtw\" (UID: \"e63e177b-5ff9-4662-be8b-4b193c72fc72\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.197319 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ftcp\" (UniqueName: \"kubernetes.io/projected/d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c-kube-api-access-6ftcp\") pod \"olm-operator-5cdf44d969-8l78l\" (UID: \"d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.208961 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-72hsj" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.211542 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4kmk\" (UniqueName: \"kubernetes.io/projected/7724e258-4050-4d7e-83c9-40b6dec81d33-kube-api-access-l4kmk\") pod \"multus-admission-controller-69db94689b-5lqg7\" (UID: \"7724e258-4050-4d7e-83c9-40b6dec81d33\") " pod="openshift-multus/multus-admission-controller-69db94689b-5lqg7" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.221143 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:22 crc kubenswrapper[5106]: E0320 00:11:22.221373 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:22.721340003 +0000 UTC m=+137.155074067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.221757 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:22 crc kubenswrapper[5106]: E0320 00:11:22.222191 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:22.722180585 +0000 UTC m=+137.155914649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.234951 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd6nq\" (UniqueName: \"kubernetes.io/projected/c2b53a90-8ad7-40b2-b35a-2f35af352e6b-kube-api-access-wd6nq\") pod \"machine-config-controller-f9cdd68f7-jxtsx\" (UID: \"c2b53a90-8ad7-40b2-b35a-2f35af352e6b\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.236447 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.245332 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.263815 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.269591 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrdmn\" (UniqueName: \"kubernetes.io/projected/59096bb7-5757-4196-96a5-f14e967998e7-kube-api-access-lrdmn\") pod \"marketplace-operator-547dbd544d-xfn66\" (UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.274693 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-44gxl\" (UniqueName: \"kubernetes.io/projected/e63e177b-5ff9-4662-be8b-4b193c72fc72-kube-api-access-44gxl\") pod \"ingress-operator-6b9cb4dbcf-smhtw\" (UID: \"e63e177b-5ff9-4662-be8b-4b193c72fc72\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.293221 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-5lqg7" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.300875 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.304024 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lzx8\" (UniqueName: \"kubernetes.io/projected/88da1299-0802-4745-8701-7de465542299-kube-api-access-6lzx8\") pod \"collect-profiles-29566080-tg7xz\" (UID: \"88da1299-0802-4745-8701-7de465542299\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.310451 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-w2brd" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.323332 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:22 crc kubenswrapper[5106]: E0320 00:11:22.323946 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:22.823926146 +0000 UTC m=+137.257660200 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.332145 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2lnq\" (UniqueName: \"kubernetes.io/projected/c0575414-b1d1-44db-b352-6f101cce8c8f-kube-api-access-r2lnq\") pod \"kube-storage-version-migrator-operator-565b79b866-q54jx\" (UID: \"c0575414-b1d1-44db-b352-6f101cce8c8f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.340434 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml5xh\" 
(UniqueName: \"kubernetes.io/projected/af8b1c72-0d76-40cc-9135-92bdefd2a461-kube-api-access-ml5xh\") pod \"oauth-openshift-66458b6674-zbpp6\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.379046 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n2dj\" (UniqueName: \"kubernetes.io/projected/15d5d3b4-91df-49a0-9032-ebd865eacb5a-kube-api-access-7n2dj\") pod \"migrator-866fcbc849-h54ck\" (UID: \"15d5d3b4-91df-49a0-9032-ebd865eacb5a\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h54ck" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.380542 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlzqr\" (UniqueName: \"kubernetes.io/projected/aba3de43-9844-4e15-b900-5a48bac6f058-kube-api-access-rlzqr\") pod \"console-operator-67c89758df-qxnjl\" (UID: \"aba3de43-9844-4e15-b900-5a48bac6f058\") " pod="openshift-console-operator/console-operator-67c89758df-qxnjl" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.401212 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf878343-818d-4ca7-a3ce-507df55ae4c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-f42c8\" (UID: \"cf878343-818d-4ca7-a3ce-507df55ae4c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.428435 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" 
Mar 20 00:11:22 crc kubenswrapper[5106]: E0320 00:11:22.428753 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:22.928741488 +0000 UTC m=+137.362475542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.429084 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-qxnjl" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.431693 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.432050 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xczxt\" (UniqueName: \"kubernetes.io/projected/94b3abc0-a538-458d-9975-c9b1a373ee95-kube-api-access-xczxt\") pod \"service-ca-operator-5b9c976747-cnh84\" (UID: \"94b3abc0-a538-458d-9975-c9b1a373ee95\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.451066 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7aa4c8fd-ca13-4ef3-b774-d55fd525fe13-kube-api-access\") pod \"kube-apiserver-operator-575994946d-ncds5\" (UID: \"7aa4c8fd-ca13-4ef3-b774-d55fd525fe13\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.453700 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwtgx\" (UniqueName: \"kubernetes.io/projected/27db15f2-d153-4ecb-beb5-b139549dcb36-kube-api-access-nwtgx\") pod \"cluster-samples-operator-6b564684c8-sprmn\" (UID: \"27db15f2-d153-4ecb-beb5-b139549dcb36\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sprmn" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.457587 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.479735 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfcj2\" (UniqueName: \"kubernetes.io/projected/61dc866d-abfc-4dea-a349-6635b614e189-kube-api-access-dfcj2\") pod \"package-server-manager-77f986bd66-vsgrz\" (UID: \"61dc866d-abfc-4dea-a349-6635b614e189\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.488100 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.496378 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.503734 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.514662 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f87mz\" (UniqueName: \"kubernetes.io/projected/cb851d4f-eefc-463d-bf17-c8ac126bd7c1-kube-api-access-f87mz\") pod \"machine-config-operator-67c9d58cbb-hzg88\" (UID: \"cb851d4f-eefc-463d-bf17-c8ac126bd7c1\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.516821 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.522985 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.523437 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4zmp\" (UniqueName: \"kubernetes.io/projected/a134eee6-8b26-4a27-8fbe-6fbc51787dc4-kube-api-access-w4zmp\") pod \"router-default-68cf44c8b8-vzb7m\" (UID: \"a134eee6-8b26-4a27-8fbe-6fbc51787dc4\") " pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.531744 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.531742 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:22 crc kubenswrapper[5106]: E0320 00:11:22.531865 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:23.031846395 +0000 UTC m=+137.465580449 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.532775 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:22 crc kubenswrapper[5106]: E0320 00:11:22.533130 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:23.033121638 +0000 UTC m=+137.466855692 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.558998 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p"] Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.559497 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.560096 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.568375 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-blwnm\" (UniqueName: \"kubernetes.io/projected/88acb60d-ae97-490e-bab2-b78f03e1b8c8-kube-api-access-blwnm\") pod \"etcd-operator-69b85846b6-t49vx\" (UID: \"88acb60d-ae97-490e-bab2-b78f03e1b8c8\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.587087 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h54ck" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.587810 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sprmn" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.587988 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.607429 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-76pwp\" (UniqueName: \"kubernetes.io/projected/969ecef5-bb59-4625-b72a-90db5ebb851c-kube-api-access-76pwp\") pod \"ingress-canary-q5tjt\" (UID: \"969ecef5-bb59-4625-b72a-90db5ebb851c\") " pod="openshift-ingress-canary/ingress-canary-q5tjt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.624119 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhhm8\" (UniqueName: \"kubernetes.io/projected/9662276f-9936-4ed0-a464-c509bbaaa7a0-kube-api-access-vhhm8\") pod \"downloads-747b44746d-ss8gd\" (UID: \"9662276f-9936-4ed0-a464-c509bbaaa7a0\") " pod="openshift-console/downloads-747b44746d-ss8gd" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.624307 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qbjj\" (UniqueName: \"kubernetes.io/projected/0f667eb1-070a-46f5-acb6-3532ff089720-kube-api-access-8qbjj\") pod \"machine-config-server-8jsvn\" (UID: \"0f667eb1-070a-46f5-acb6-3532ff089720\") " pod="openshift-machine-config-operator/machine-config-server-8jsvn" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.634082 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:22 crc kubenswrapper[5106]: E0320 00:11:22.634499 
5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:23.13448625 +0000 UTC m=+137.568220294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.651456 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-ss8gd" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.660376 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.667272 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r925b\" (UniqueName: \"kubernetes.io/projected/9613d763-dd08-4d6e-8cf3-ef60b7ef3211-kube-api-access-r925b\") pod \"service-ca-74545575db-2xds7\" (UID: \"9613d763-dd08-4d6e-8cf3-ef60b7ef3211\") " pod="openshift-service-ca/service-ca-74545575db-2xds7" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.669758 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbpqw\" (UniqueName: \"kubernetes.io/projected/b07e83a4-1ea1-490a-8c95-49627f697ee0-kube-api-access-zbpqw\") pod \"authentication-operator-7f5c659b84-v7lf8\" (UID: \"b07e83a4-1ea1-490a-8c95-49627f697ee0\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.696199 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q5tjt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.696909 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7vf4\" (UniqueName: \"kubernetes.io/projected/0d49bd21-508b-4161-9bef-e0bad55ee83b-kube-api-access-q7vf4\") pod \"cni-sysctl-allowlist-ds-hdf7z\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.696989 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5lkc\" (UniqueName: \"kubernetes.io/projected/f7dc02a2-0fb1-41e9-9d23-2565378a45a4-kube-api-access-h5lkc\") pod \"csi-hostpathplugin-8fdp6\" (UID: \"f7dc02a2-0fb1-41e9-9d23-2565378a45a4\") " pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.698982 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8jsvn" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.719337 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqkxz\" (UniqueName: \"kubernetes.io/projected/e6e237e2-c84f-498f-888a-4fdaa7af3eb8-kube-api-access-hqkxz\") pod \"packageserver-7d4fc7d867-lw4rt\" (UID: \"e6e237e2-c84f-498f-888a-4fdaa7af3eb8\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.736226 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:22 crc kubenswrapper[5106]: E0320 00:11:22.737895 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:23.237872624 +0000 UTC m=+137.671606678 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.756968 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.767833 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lmh2\" (UniqueName: \"kubernetes.io/projected/2de01ede-f866-4638-9351-ab1ef6392aba-kube-api-access-4lmh2\") pod \"dns-default-dg59t\" (UID: \"2de01ede-f866-4638-9351-ab1ef6392aba\") " pod="openshift-dns/dns-default-dg59t" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.771083 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29566080-czff6" event={"ID":"884b9b2b-1ff2-4758-b964-5030e8973573","Type":"ContainerStarted","Data":"1d07106e238e134efd8c3a707cf028575a634888be0f3a8cd3d9946829b42443"} Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.779861 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" event={"ID":"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38","Type":"ContainerStarted","Data":"b52828f9dc5580c60b2d55e439ac6b138baf5f0e19972535af21ac7d694359f8"} Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.780840 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.789053 5106 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-cp4kp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.789124 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" podUID="6ae55f47-30bb-45c6-bd6c-7fa0c7810d38" containerName="controller-manager" probeResult="failure" 
output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.790791 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" event={"ID":"a134eee6-8b26-4a27-8fbe-6fbc51787dc4","Type":"ContainerStarted","Data":"8cdd830d48ac8ef0cfd27aea8a4416b601cf0db28cc05f8d6df5739a878c43ab"} Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.797970 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl" event={"ID":"0ff8b42b-9c00-4f62-bc0c-4d14276cfb63","Type":"ContainerStarted","Data":"dee7c9c1cdc2095a8facc69b6d75d9bf4aa876df4341c9ab6b98a8384a77652c"} Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.798007 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl" event={"ID":"0ff8b42b-9c00-4f62-bc0c-4d14276cfb63","Type":"ContainerStarted","Data":"df5e36cfc50487ef6a43bd90399d296cae6e0b4784cb8af6e29196b2009086cb"} Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.815627 5106 generic.go:358] "Generic (PLEG): container finished" podID="4123c23b-ea73-40e1-965a-5b1777b4e2be" containerID="f62b4440232924f22224007c69cd12432a8bc790a447e718175eb15ee4681a05" exitCode=0 Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.815998 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c" event={"ID":"4123c23b-ea73-40e1-965a-5b1777b4e2be","Type":"ContainerDied","Data":"f62b4440232924f22224007c69cd12432a8bc790a447e718175eb15ee4681a05"} Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.816032 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c" 
event={"ID":"4123c23b-ea73-40e1-965a-5b1777b4e2be","Type":"ContainerStarted","Data":"205061e1ef5943e0f6701fd9f0b466eff3f0ac3066f2d1c397dd86253a28094a"} Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.838087 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:22 crc kubenswrapper[5106]: E0320 00:11:22.838522 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:23.338470026 +0000 UTC m=+137.772204080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.843798 5106 generic.go:358] "Generic (PLEG): container finished" podID="17377ffd-aa79-4dee-bfea-6ae6b3026fd1" containerID="f4e98e5dbee47c9a3b53303f7fd0d5d022b3f329c06984abf4808d225eba9e87" exitCode=0 Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.845318 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" event={"ID":"17377ffd-aa79-4dee-bfea-6ae6b3026fd1","Type":"ContainerDied","Data":"f4e98e5dbee47c9a3b53303f7fd0d5d022b3f329c06984abf4808d225eba9e87"} Mar 20 00:11:22 crc 
kubenswrapper[5106]: I0320 00:11:22.845351 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" event={"ID":"17377ffd-aa79-4dee-bfea-6ae6b3026fd1","Type":"ContainerStarted","Data":"b8244c61fff1c339754201560b82c50d0b1cdcf11cbb4bf30fd8b98a4d5501d6"} Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.859868 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" event={"ID":"8539a810-4a95-4205-99c6-30b6362cfa01","Type":"ContainerStarted","Data":"2be1cee4bd91a76d9e347bc4fca7f5da503d437aebc51672c85d45bf07cfc987"} Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.860594 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.869599 5106 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-crd8g container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.869657 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" podUID="8539a810-4a95-4205-99c6-30b6362cfa01" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.874554 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd" 
event={"ID":"2c10c417-d8e5-4933-96b6-3a365ea480f3","Type":"ContainerStarted","Data":"7434d2a8ec508e68ca6ee5a53fadd241f751fa28dd4740edc09917fa9fd44855"} Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.927430 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.936444 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-vx9v6" event={"ID":"74f7b3bf-429d-4b60-8b80-48300a789b1d","Type":"ContainerStarted","Data":"e8c2acd3ea961eb329b29e1bf4902677d2b31fb2fb550ed58f322a99f4fda4e7"} Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.944033 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-vx9v6" event={"ID":"74f7b3bf-429d-4b60-8b80-48300a789b1d","Type":"ContainerStarted","Data":"53f370a6438274e7a4c6592732b681ada0f8df96bef1d95f6e598a0913aa7320"} Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.944909 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.945375 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:22 crc kubenswrapper[5106]: E0320 00:11:22.946258 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-03-20 00:11:23.446245304 +0000 UTC m=+137.879979358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.949805 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-2xds7" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.968397 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr" event={"ID":"2722b7b1-fe01-4f55-8114-86b441329659","Type":"ContainerStarted","Data":"b46ea16b321510de48fef3b9e752dcef5f4ecae997d48264a952436e4fc70204"} Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.968442 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr" event={"ID":"2722b7b1-fe01-4f55-8114-86b441329659","Type":"ContainerStarted","Data":"25f02a9d56235d303215ba2cae282bcd4f9281fb20af1b5a86997ac2a39de69f"} Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.969046 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.984191 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dg59t" Mar 20 00:11:22 crc kubenswrapper[5106]: I0320 00:11:22.986604 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.047542 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:23 crc kubenswrapper[5106]: E0320 00:11:23.047729 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:23.547704868 +0000 UTC m=+137.981438922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.048283 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:23 crc kubenswrapper[5106]: E0320 00:11:23.048638 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2026-03-20 00:11:23.548630052 +0000 UTC m=+137.982364106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.153164 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:23 crc kubenswrapper[5106]: E0320 00:11:23.154013 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:23.653990718 +0000 UTC m=+138.087724782 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.259369 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:23 crc kubenswrapper[5106]: E0320 00:11:23.259759 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:23.759744153 +0000 UTC m=+138.193478207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.286439 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx"] Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.286481 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-5lqg7"] Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.286494 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l"] Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.286505 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-72hsj"] Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.312022 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-8jhlx" podStartSLOduration=113.312002545 podStartE2EDuration="1m53.312002545s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:23.311911783 +0000 UTC m=+137.745645857" watchObservedRunningTime="2026-03-20 00:11:23.312002545 +0000 UTC m=+137.745736599" Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.361865 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:23 crc kubenswrapper[5106]: E0320 00:11:23.362108 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:23.862090081 +0000 UTC m=+138.295824135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.362568 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:23 crc kubenswrapper[5106]: E0320 00:11:23.363020 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:23.863003164 +0000 UTC m=+138.296737218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.433872 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-ngmrb" podStartSLOduration=113.433854977 podStartE2EDuration="1m53.433854977s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:23.433365004 +0000 UTC m=+137.867099058" watchObservedRunningTime="2026-03-20 00:11:23.433854977 +0000 UTC m=+137.867589031" Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.485789 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:23 crc kubenswrapper[5106]: E0320 00:11:23.486145 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:23.986122529 +0000 UTC m=+138.419856583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.587545 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:23 crc kubenswrapper[5106]: E0320 00:11:23.588188 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:24.088170239 +0000 UTC m=+138.521904293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:23 crc kubenswrapper[5106]: W0320 00:11:23.669555 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2b53a90_8ad7_40b2_b35a_2f35af352e6b.slice/crio-65192e63834ec73712695cfc9f275fb861f4f1092a88b88b467f732dea95ac43 WatchSource:0}: Error finding container 65192e63834ec73712695cfc9f275fb861f4f1092a88b88b467f732dea95ac43: Status 404 returned error can't find the container with id 65192e63834ec73712695cfc9f275fb861f4f1092a88b88b467f732dea95ac43 Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.689213 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:23 crc kubenswrapper[5106]: E0320 00:11:23.689633 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:24.189617913 +0000 UTC m=+138.623351967 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:23 crc kubenswrapper[5106]: W0320 00:11:23.703846 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd943dab1_58e7_4aa6_b9ac_6f8a795f8c6c.slice/crio-1b766e578735b0dae84db19dfc53803bb46e678151960c0fb452256ab96de798 WatchSource:0}: Error finding container 1b766e578735b0dae84db19dfc53803bb46e678151960c0fb452256ab96de798: Status 404 returned error can't find the container with id 1b766e578735b0dae84db19dfc53803bb46e678151960c0fb452256ab96de798 Mar 20 00:11:23 crc kubenswrapper[5106]: W0320 00:11:23.729251 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7724e258_4050_4d7e_83c9_40b6dec81d33.slice/crio-6ba36f92f9753533b9a22504ef0de10057ebcd4d91328415b60836645519f842 WatchSource:0}: Error finding container 6ba36f92f9753533b9a22504ef0de10057ebcd4d91328415b60836645519f842: Status 404 returned error can't find the container with id 6ba36f92f9753533b9a22504ef0de10057ebcd4d91328415b60836645519f842 Mar 20 00:11:23 crc kubenswrapper[5106]: W0320 00:11:23.749017 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da82db9_f242_4af1_83ef_d68599ce6c8d.slice/crio-d7ca19d70d7d2c9f5c3e7a4fbf36d39d4b9baefd7c3a9ef796a7c5a4822767c8 WatchSource:0}: Error finding container d7ca19d70d7d2c9f5c3e7a4fbf36d39d4b9baefd7c3a9ef796a7c5a4822767c8: Status 404 returned error can't find the container with 
id d7ca19d70d7d2c9f5c3e7a4fbf36d39d4b9baefd7c3a9ef796a7c5a4822767c8 Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.790444 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:23 crc kubenswrapper[5106]: E0320 00:11:23.790884 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:24.290870862 +0000 UTC m=+138.724604916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.860693 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp"] Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.895369 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:23 crc kubenswrapper[5106]: E0320 00:11:23.895740 5106 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:24.395723294 +0000 UTC m=+138.829457338 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.984613 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" event={"ID":"0d49bd21-508b-4161-9bef-e0bad55ee83b","Type":"ContainerStarted","Data":"66c14bfa43a92f0157c21f210ed35caf0599532f61b59aef8474a92095f9586f"} Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.989705 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-5lqg7" event={"ID":"7724e258-4050-4d7e-83c9-40b6dec81d33","Type":"ContainerStarted","Data":"6ba36f92f9753533b9a22504ef0de10057ebcd4d91328415b60836645519f842"} Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.999228 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l" event={"ID":"d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c","Type":"ContainerStarted","Data":"1b766e578735b0dae84db19dfc53803bb46e678151960c0fb452256ab96de798"} Mar 20 00:11:23 crc kubenswrapper[5106]: I0320 00:11:23.999491 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:24 crc kubenswrapper[5106]: E0320 00:11:24.006230 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:24.499981111 +0000 UTC m=+138.933715165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.039285 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" event={"ID":"a134eee6-8b26-4a27-8fbe-6fbc51787dc4","Type":"ContainerStarted","Data":"17578875b734402cdff8423ebaec02b72a291fea87a5282aec79faba94d3cad0"}
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.089031 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8jsvn" event={"ID":"0f667eb1-070a-46f5-acb6-3532ff089720","Type":"ContainerStarted","Data":"6022c3c09153d0e74a1507660c8576537a4f1b2e6fc02faab6d53c7830cf0ad0"}
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.089076 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8jsvn" event={"ID":"0f667eb1-070a-46f5-acb6-3532ff089720","Type":"ContainerStarted","Data":"57343c9932b8634d92bd448ef6178808895489a1edd60595c5359d40bd9e55a1"}
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.093488 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx" event={"ID":"c2b53a90-8ad7-40b2-b35a-2f35af352e6b","Type":"ContainerStarted","Data":"65192e63834ec73712695cfc9f275fb861f4f1092a88b88b467f732dea95ac43"}
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.095159 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-72hsj" event={"ID":"3da82db9-f242-4af1-83ef-d68599ce6c8d","Type":"ContainerStarted","Data":"d7ca19d70d7d2c9f5c3e7a4fbf36d39d4b9baefd7c3a9ef796a7c5a4822767c8"}
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.107261 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v"]
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.109124 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" podStartSLOduration=114.109101464 podStartE2EDuration="1m54.109101464s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:24.106009284 +0000 UTC m=+138.539743358" watchObservedRunningTime="2026-03-20 00:11:24.109101464 +0000 UTC m=+138.542835518"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.109935 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:24 crc kubenswrapper[5106]: E0320 00:11:24.110132 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:24.61009713 +0000 UTC m=+139.043831184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.115115 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:24 crc kubenswrapper[5106]: E0320 00:11:24.116105 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:24.616090815 +0000 UTC m=+139.049824859 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.177971 5106 generic.go:358] "Generic (PLEG): container finished" podID="20d42914-6ce8-4457-aa77-e01ef4fb9895" containerID="2383967b5fced5a0344718b531bff9dc7704e309df989782b102eff3eefd5c51" exitCode=0
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.178789 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" event={"ID":"20d42914-6ce8-4457-aa77-e01ef4fb9895","Type":"ContainerDied","Data":"2383967b5fced5a0344718b531bff9dc7704e309df989782b102eff3eefd5c51"}
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.178815 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" event={"ID":"20d42914-6ce8-4457-aa77-e01ef4fb9895","Type":"ContainerStarted","Data":"fc365244e0a516802bd8b8d3b28c9cf2666e69d685084aa8b78da5fbaba511c6"}
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.199046 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c" event={"ID":"4123c23b-ea73-40e1-965a-5b1777b4e2be","Type":"ContainerStarted","Data":"eec5727b3a96fda0ac31619e0b19261eb7074d7d28d6f18bed62cef0b6a8bff6"}
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.199130 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.221909 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:24 crc kubenswrapper[5106]: E0320 00:11:24.222631 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:24.722603319 +0000 UTC m=+139.156337373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.236916 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" event={"ID":"17377ffd-aa79-4dee-bfea-6ae6b3026fd1","Type":"ContainerStarted","Data":"d4b9b163ad704366f691ab032d63cb7bf6d120cc76961a644dfbfc78ba363908"}
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.242504 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp" event={"ID":"8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4","Type":"ContainerStarted","Data":"b30651442e9a70b27b5aa5adf0682fdd809291c2c90d3cbfff2908ac9ba06338"}
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.313396 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" podStartSLOduration=114.313324046 podStartE2EDuration="1m54.313324046s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:24.27639231 +0000 UTC m=+138.710126354" watchObservedRunningTime="2026-03-20 00:11:24.313324046 +0000 UTC m=+138.747058100"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.329764 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-x2xbl" podStartSLOduration=114.32974618 podStartE2EDuration="1m54.32974618s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:24.303623195 +0000 UTC m=+138.737357249" watchObservedRunningTime="2026-03-20 00:11:24.32974618 +0000 UTC m=+138.763480234"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.331177 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:24 crc kubenswrapper[5106]: E0320 00:11:24.331532 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:24.831515476 +0000 UTC m=+139.265249530 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.344824 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podStartSLOduration=114.34481101 podStartE2EDuration="1m54.34481101s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:24.343760593 +0000 UTC m=+138.777494647" watchObservedRunningTime="2026-03-20 00:11:24.34481101 +0000 UTC m=+138.778545064"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.357301 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29566080-czff6" podStartSLOduration=114.357285323 podStartE2EDuration="1m54.357285323s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:24.356714548 +0000 UTC m=+138.790448612" watchObservedRunningTime="2026-03-20 00:11:24.357285323 +0000 UTC m=+138.791019377"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.385148 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-fdqzr" podStartSLOduration=114.385135413 podStartE2EDuration="1m54.385135413s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:24.384767304 +0000 UTC m=+138.818501358" watchObservedRunningTime="2026-03-20 00:11:24.385135413 +0000 UTC m=+138.818869457"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.445848 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:24 crc kubenswrapper[5106]: E0320 00:11:24.446699 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:24.946684275 +0000 UTC m=+139.380418319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.471711 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jbnnd" podStartSLOduration=114.471688202 podStartE2EDuration="1m54.471688202s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:24.469498085 +0000 UTC m=+138.903232139" watchObservedRunningTime="2026-03-20 00:11:24.471688202 +0000 UTC m=+138.905422256"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.524511 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-vx9v6" podStartSLOduration=114.524491388 podStartE2EDuration="1m54.524491388s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:24.522277531 +0000 UTC m=+138.956011585" watchObservedRunningTime="2026-03-20 00:11:24.524491388 +0000 UTC m=+138.958225442"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.524593 5106 ???:1] "http: TLS handshake error from 192.168.126.11:48352: no serving certificate available for the kubelet"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.532568 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.547653 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:24 crc kubenswrapper[5106]: E0320 00:11:24.548248 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:25.048232362 +0000 UTC m=+139.481966416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.555758 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8jsvn" podStartSLOduration=5.555366297 podStartE2EDuration="5.555366297s" podCreationTimestamp="2026-03-20 00:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:24.551160688 +0000 UTC m=+138.984894742" watchObservedRunningTime="2026-03-20 00:11:24.555366297 +0000 UTC m=+138.989100351"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.610235 5106 ???:1] "http: TLS handshake error from 192.168.126.11:48358: no serving certificate available for the kubelet"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.631184 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c" podStartSLOduration=114.631167177 podStartE2EDuration="1m54.631167177s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:24.58913457 +0000 UTC m=+139.022868634" watchObservedRunningTime="2026-03-20 00:11:24.631167177 +0000 UTC m=+139.064901231"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.649052 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:24 crc kubenswrapper[5106]: E0320 00:11:24.649280 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:25.149237205 +0000 UTC m=+139.582971269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.649513 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:24 crc kubenswrapper[5106]: E0320 00:11:24.650052 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:25.150037965 +0000 UTC m=+139.583772019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.714230 5106 ???:1] "http: TLS handshake error from 192.168.126.11:48372: no serving certificate available for the kubelet"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.720981 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 00:11:24 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld
Mar 20 00:11:24 crc kubenswrapper[5106]: [+]process-running ok
Mar 20 00:11:24 crc kubenswrapper[5106]: healthz check failed
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.721018 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.735328 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.752374 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:24 crc kubenswrapper[5106]: E0320 00:11:24.752914 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:25.252898676 +0000 UTC m=+139.686632720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.822332 5106 ???:1] "http: TLS handshake error from 192.168.126.11:48386: no serving certificate available for the kubelet"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.854598 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:24 crc kubenswrapper[5106]: E0320 00:11:24.854942 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:25.354929965 +0000 UTC m=+139.788664019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.864359 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.925124 5106 ???:1] "http: TLS handshake error from 192.168.126.11:48390: no serving certificate available for the kubelet"
Mar 20 00:11:24 crc kubenswrapper[5106]: I0320 00:11:24.959067 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:24 crc kubenswrapper[5106]: E0320 00:11:24.959588 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:25.459556592 +0000 UTC m=+139.893290646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.033625 5106 ???:1] "http: TLS handshake error from 192.168.126.11:48402: no serving certificate available for the kubelet"
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.066961 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:25 crc kubenswrapper[5106]: E0320 00:11:25.067273 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:25.567261468 +0000 UTC m=+140.000995522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.167880 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:25 crc kubenswrapper[5106]: E0320 00:11:25.168210 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:25.668184448 +0000 UTC m=+140.101918502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.168551 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:25 crc kubenswrapper[5106]: E0320 00:11:25.169344 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:25.669331708 +0000 UTC m=+140.103065762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.221112 5106 ???:1] "http: TLS handshake error from 192.168.126.11:48408: no serving certificate available for the kubelet"
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.246874 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v" event={"ID":"4081cd08-5e12-4cca-bfd2-666bb6d87464","Type":"ContainerStarted","Data":"0cfe7f2cd151c56eca8c8fcde305fa4360be4a289271b7c370cf962139e9d1ea"}
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.248122 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" event={"ID":"0d49bd21-508b-4161-9bef-e0bad55ee83b","Type":"ContainerStarted","Data":"6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e"}
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.248830 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z"
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.250785 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l" event={"ID":"d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c","Type":"ContainerStarted","Data":"e626a36bf2eb63b9949ad33614a07cc216bd3c6489812835557c95bccb9749f2"}
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.251222 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l"
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.254217 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx" event={"ID":"c2b53a90-8ad7-40b2-b35a-2f35af352e6b","Type":"ContainerStarted","Data":"c22149761e0ec3f284f7f2c7a1ca54b5fe85a5e15dc4c46fe502fdcddedc739f"}
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.258052 5106 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-8l78l container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.258125 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l" podUID="d943dab1-58e7-4aa6-b9ac-6f8a795f8c6c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.267597 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" podStartSLOduration=6.267563949 podStartE2EDuration="6.267563949s" podCreationTimestamp="2026-03-20 00:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:25.265074365 +0000 UTC m=+139.698808419" watchObservedRunningTime="2026-03-20 00:11:25.267563949 +0000 UTC m=+139.701298003"
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.269840 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:25 crc kubenswrapper[5106]: E0320 00:11:25.270338 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:25.77031947 +0000 UTC m=+140.204053524 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.273494 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z"
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.282798 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l" podStartSLOduration=115.282776633 podStartE2EDuration="1m55.282776633s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:25.281595492 +0000 UTC m=+139.715329566" watchObservedRunningTime="2026-03-20 00:11:25.282776633 +0000 UTC m=+139.716510687"
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.301062 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q5tjt"]
Mar 20 00:11:25 crc kubenswrapper[5106]: W0320 00:11:25.311139 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod969ecef5_bb59_4625_b72a_90db5ebb851c.slice/crio-1a6b742a706bdc3089b39f8ba3a28d2838b1335dd4e760c0ed98d4e569b4b8ac WatchSource:0}: Error finding container 1a6b742a706bdc3089b39f8ba3a28d2838b1335dd4e760c0ed98d4e569b4b8ac: Status 404 returned error can't find the container with id 1a6b742a706bdc3089b39f8ba3a28d2838b1335dd4e760c0ed98d4e569b4b8ac Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.325067 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sprmn"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.370002 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.371342 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:25 crc kubenswrapper[5106]: E0320 00:11:25.371835 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:25.871818336 +0000 UTC m=+140.305552430 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.455103 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.458029 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.468145 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-w2brd"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.472232 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.472731 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qxnjl"] Mar 20 00:11:25 crc kubenswrapper[5106]: E0320 00:11:25.472952 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-03-20 00:11:25.972930851 +0000 UTC m=+140.406664905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.473665 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-h54ck"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.475377 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.485946 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-ss8gd"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.501705 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.509105 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.523920 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.535311 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-zbpp6"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.537630 5106 patch_prober.go:28] interesting 
pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:25 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:25 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:25 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.537689 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.546280 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-2xds7"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.549610 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-xfn66"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.556484 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8fdp6"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.560790 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dg59t"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.562337 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.567825 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz"] Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.568595 5106 ???:1] "http: TLS handshake error from 192.168.126.11:48418: no serving certificate available for the kubelet" 
Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.574499 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:25 crc kubenswrapper[5106]: E0320 00:11:25.575015 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:26.075001392 +0000 UTC m=+140.508735446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:25 crc kubenswrapper[5106]: W0320 00:11:25.626737 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2de01ede_f866_4638_9351_ab1ef6392aba.slice/crio-e8f774fe235c1df57a66e275314d3c4e80350d0d294aa77a18db08abd16b2cef WatchSource:0}: Error finding container e8f774fe235c1df57a66e275314d3c4e80350d0d294aa77a18db08abd16b2cef: Status 404 returned error can't find the container with id e8f774fe235c1df57a66e275314d3c4e80350d0d294aa77a18db08abd16b2cef Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.674520 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-t49vx"] Mar 20 
00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.675107 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:25 crc kubenswrapper[5106]: E0320 00:11:25.675244 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:26.175226514 +0000 UTC m=+140.608960568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.675410 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:25 crc kubenswrapper[5106]: E0320 00:11:25.675796 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-03-20 00:11:26.175789269 +0000 UTC m=+140.609523323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.705804 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx"] Mar 20 00:11:25 crc kubenswrapper[5106]: W0320 00:11:25.755853 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88acb60d_ae97_490e_bab2_b78f03e1b8c8.slice/crio-f674925808dc72362ff1e19567d856b6d84a014518e7dd53def80336780b952b WatchSource:0}: Error finding container f674925808dc72362ff1e19567d856b6d84a014518e7dd53def80336780b952b: Status 404 returned error can't find the container with id f674925808dc72362ff1e19567d856b6d84a014518e7dd53def80336780b952b Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.780298 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:25 crc kubenswrapper[5106]: E0320 00:11:25.780589 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af 
nodeName:}" failed. No retries permitted until 2026-03-20 00:11:26.280550768 +0000 UTC m=+140.714284822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.780771 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:25 crc kubenswrapper[5106]: E0320 00:11:25.781042 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:26.281026031 +0000 UTC m=+140.714760085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.881819 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:25 crc kubenswrapper[5106]: E0320 00:11:25.881997 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:26.381967282 +0000 UTC m=+140.815701336 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:25 crc kubenswrapper[5106]: I0320 00:11:25.983333 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:25 crc kubenswrapper[5106]: E0320 00:11:25.983808 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:26.483785566 +0000 UTC m=+140.917519620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.084978 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:26 crc kubenswrapper[5106]: E0320 00:11:26.085204 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:26.585172698 +0000 UTC m=+141.018906762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.122174 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hdf7z"] Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.186213 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:26 crc kubenswrapper[5106]: E0320 00:11:26.186556 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:26.68654338 +0000 UTC m=+141.120277434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.266379 5106 ???:1] "http: TLS handshake error from 192.168.126.11:48428: no serving certificate available for the kubelet" Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.287113 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:26 crc kubenswrapper[5106]: E0320 00:11:26.287475 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:26.787457491 +0000 UTC m=+141.221191545 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.303951 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v" event={"ID":"4081cd08-5e12-4cca-bfd2-666bb6d87464","Type":"ContainerStarted","Data":"13745c3ba108256d5e5268c9037a109dfdc6de1b09da518f9f68015bd58b8e9d"} Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.308360 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" event={"ID":"20d42914-6ce8-4457-aa77-e01ef4fb9895","Type":"ContainerStarted","Data":"e6e083e591317d94d3e29bef140663af68392dc6cc193b6890b9700f12763f84"} Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.312764 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ss8gd" event={"ID":"9662276f-9936-4ed0-a464-c509bbaaa7a0","Type":"ContainerStarted","Data":"bbf670bc57a8949f7f83a47e63ef36968060509542f98904b5600153628a4dcf"} Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.312798 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ss8gd" event={"ID":"9662276f-9936-4ed0-a464-c509bbaaa7a0","Type":"ContainerStarted","Data":"cf8ba3f43c7598754c5098987047af781ab070548ea6f69d37d2fa8824eb69d8"} Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.313409 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-ss8gd" Mar 20 00:11:26 
crc kubenswrapper[5106]: I0320 00:11:26.316797 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" event={"ID":"17377ffd-aa79-4dee-bfea-6ae6b3026fd1","Type":"ContainerStarted","Data":"ee60f4269aaca76fd21839bf2339acd934c7d11eaa610ea5564cdad5129a37e7"} Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.318377 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp" event={"ID":"8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4","Type":"ContainerStarted","Data":"4cf21d5bcb5086c26701b18d1138e227fc40163d425badf752e5b9e97bbb1ffc"} Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.319108 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp" Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.340455 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.340497 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.340699 5106 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-qgpkp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.340792 
5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp" podUID="8f2d5bc7-aa74-4ad2-93bd-f549dccf69d4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.345705 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" event={"ID":"af8b1c72-0d76-40cc-9135-92bdefd2a461","Type":"ContainerStarted","Data":"8067bb27876a0cc450f1dc3d5cf29040cf4f3906dc95582bfd1a5c0b9c6e1526"} Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.356173 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-n7m7v" podStartSLOduration=116.356163548 podStartE2EDuration="1m56.356163548s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:26.335065832 +0000 UTC m=+140.768799886" watchObservedRunningTime="2026-03-20 00:11:26.356163548 +0000 UTC m=+140.789897602" Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.356238 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-ss8gd" podStartSLOduration=116.3562358 podStartE2EDuration="1m56.3562358s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:26.354831284 +0000 UTC m=+140.788565338" watchObservedRunningTime="2026-03-20 00:11:26.3562358 +0000 UTC m=+140.789969854" Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.356807 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" event={"ID":"b07e83a4-1ea1-490a-8c95-49627f697ee0","Type":"ContainerStarted","Data":"13483830c099a6c8f6796a345f6f4263b6b45e108e3a8973e296c948758a0532"} Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.373459 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz" event={"ID":"88da1299-0802-4745-8701-7de465542299","Type":"ContainerStarted","Data":"8d156bc0fb97314d5e1ebf2b69675d2012547fd513633de185f57f44179855e2"} Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.373528 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz" event={"ID":"88da1299-0802-4745-8701-7de465542299","Type":"ContainerStarted","Data":"8a85498319c3c126fd820310d3bbc7aca96e6e06d5709aa7b5190c0b851d6eea"} Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.395269 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:26 crc kubenswrapper[5106]: E0320 00:11:26.397683 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:26.897670462 +0000 UTC m=+141.331404516 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.433496 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8" event={"ID":"cf878343-818d-4ca7-a3ce-507df55ae4c5","Type":"ContainerStarted","Data":"cf2ae9f2a419c5449ca59001d885a13f02701825d086f68131fa5a73c7358ab6"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.436340 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-qxnjl" event={"ID":"aba3de43-9844-4e15-b900-5a48bac6f058","Type":"ContainerStarted","Data":"3b5ced058f9e26943b996095a7ee6b2660e15d57c2af10c4e3a5553ed67cce56"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.436369 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-qxnjl" event={"ID":"aba3de43-9844-4e15-b900-5a48bac6f058","Type":"ContainerStarted","Data":"70207eab4dd0309d9f2530e2b4a8f568f917c928eacbc67bac597bd4545195c5"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.437776 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-qxnjl"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.446528 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp" podStartSLOduration=116.446517045 podStartE2EDuration="1m56.446517045s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:26.445988031 +0000 UTC m=+140.879722085" watchObservedRunningTime="2026-03-20 00:11:26.446517045 +0000 UTC m=+140.880251099"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.446667 5106 patch_prober.go:28] interesting pod/console-operator-67c89758df-qxnjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.446690 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" podStartSLOduration=116.44668669 podStartE2EDuration="1m56.44668669s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:26.411081518 +0000 UTC m=+140.844815572" watchObservedRunningTime="2026-03-20 00:11:26.44668669 +0000 UTC m=+140.880420744"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.446720 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-qxnjl" podUID="aba3de43-9844-4e15-b900-5a48bac6f058" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.458105 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.459507 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx" event={"ID":"c2b53a90-8ad7-40b2-b35a-2f35af352e6b","Type":"ContainerStarted","Data":"43b23b31915df176491f9e71fd38b227d911e8a5bef909efba11d8f66be0693b"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.463693 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.465178 5106 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-wqms8 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.19:8443/livez\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.465230 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" podUID="17377ffd-aa79-4dee-bfea-6ae6b3026fd1" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.19:8443/livez\": dial tcp 10.217.0.19:8443: connect: connection refused"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.472796 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5" event={"ID":"7aa4c8fd-ca13-4ef3-b774-d55fd525fe13","Type":"ContainerStarted","Data":"e6ba32cb28747ac6fe4b78f2a9439aa4e9c9af7770cf565b81f113e9ab5875ac"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.472844 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5" event={"ID":"7aa4c8fd-ca13-4ef3-b774-d55fd525fe13","Type":"ContainerStarted","Data":"ac358186d7a3f86693013258160977c656242089636219f0c33366facd35e58a"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.486686 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" event={"ID":"e63e177b-5ff9-4662-be8b-4b193c72fc72","Type":"ContainerStarted","Data":"0078cfb97ea0c4d0dd2979049819eec199583a78c4864cc9924b338392fcc69a"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.489618 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-w2brd" event={"ID":"992000e3-50f4-48fa-8a55-58bfade85d0c","Type":"ContainerStarted","Data":"519f76418d1c4bfc4dfa5c6addbb750074b5392d39b3f9aa36c408741dc62e46"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.489643 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-w2brd" event={"ID":"992000e3-50f4-48fa-8a55-58bfade85d0c","Type":"ContainerStarted","Data":"9ff6b056ba48f838e9dbab37cafede0ed5db871eb08318114eab7fbddb9a4763"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.493148 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-72hsj" event={"ID":"3da82db9-f242-4af1-83ef-d68599ce6c8d","Type":"ContainerStarted","Data":"b807a4878588eeae49d9fb2236bbf7558784c149f4272cf7d5b655b6a87ab4d4"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.496394 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:26 crc kubenswrapper[5106]: E0320 00:11:26.496864 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:26.996849037 +0000 UTC m=+141.430583091 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.497927 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.502106 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" podStartSLOduration=116.502090813 podStartE2EDuration="1m56.502090813s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:26.49927361 +0000 UTC m=+140.933007664" watchObservedRunningTime="2026-03-20 00:11:26.502090813 +0000 UTC m=+140.935824857"
Mar 20 00:11:26 crc kubenswrapper[5106]: E0320 00:11:26.503209 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:27.003192401 +0000 UTC m=+141.436926455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.507369 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88" event={"ID":"cb851d4f-eefc-463d-bf17-c8ac126bd7c1","Type":"ContainerStarted","Data":"dfd2eb50133ed3d585c3d49230f365a4fea8c6cbaee3773e90a2f3d211ef439c"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.507417 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88" event={"ID":"cb851d4f-eefc-463d-bf17-c8ac126bd7c1","Type":"ContainerStarted","Data":"8327f32a5e217869730d667aabe27334abc53e8a1eee824ee83c885210f94ae6"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.530186 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-qxnjl" podStartSLOduration=116.530169839 podStartE2EDuration="1m56.530169839s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:26.522378587 +0000 UTC m=+140.956112641" watchObservedRunningTime="2026-03-20 00:11:26.530169839 +0000 UTC m=+140.963903893"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.538615 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 00:11:26 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld
Mar 20 00:11:26 crc kubenswrapper[5106]: [+]process-running ok
Mar 20 00:11:26 crc kubenswrapper[5106]: healthz check failed
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.538664 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.539627 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-ncds5" podStartSLOduration=116.539619253 podStartE2EDuration="1m56.539619253s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:26.537978621 +0000 UTC m=+140.971712665" watchObservedRunningTime="2026-03-20 00:11:26.539619253 +0000 UTC m=+140.973353307"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.539687 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84" event={"ID":"94b3abc0-a538-458d-9975-c9b1a373ee95","Type":"ContainerStarted","Data":"c04950fee22b31c9cfc3a78b33fac3e7c6ca7f7adae502dd2d67a82f8091a958"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.539737 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84" event={"ID":"94b3abc0-a538-458d-9975-c9b1a373ee95","Type":"ContainerStarted","Data":"fb301696773d3b5340e1b373e713a2d2d13a1132324b81ce94da13cd9ee3698a"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.547100 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q5tjt" event={"ID":"969ecef5-bb59-4625-b72a-90db5ebb851c","Type":"ContainerStarted","Data":"88578484f14471352e7a0315b1cfd86c080582f048038334c4173e21ec2227c2"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.547136 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q5tjt" event={"ID":"969ecef5-bb59-4625-b72a-90db5ebb851c","Type":"ContainerStarted","Data":"1a6b742a706bdc3089b39f8ba3a28d2838b1335dd4e760c0ed98d4e569b4b8ac"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.551222 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h54ck" event={"ID":"15d5d3b4-91df-49a0-9032-ebd865eacb5a","Type":"ContainerStarted","Data":"cb90b55793d6068ec317f168f2944165b19895d518897c400d5a1e5430567180"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.551246 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h54ck" event={"ID":"15d5d3b4-91df-49a0-9032-ebd865eacb5a","Type":"ContainerStarted","Data":"b70f8247341a0b52f41912a5497265bc81f3442ead1701ccf967309f8fac66d4"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.554162 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz" event={"ID":"61dc866d-abfc-4dea-a349-6635b614e189","Type":"ContainerStarted","Data":"e6d9e21ab6ea9658f5d1975ef6be58a5e071e285025644810655208013d1361d"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.571856 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" event={"ID":"59096bb7-5757-4196-96a5-f14e967998e7","Type":"ContainerStarted","Data":"1a7702b74517303943adabb2ff5993398f40d095b0d97c5bff706c52e8c2477d"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.573447 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz" podStartSLOduration=115.573430588 podStartE2EDuration="1m55.573430588s" podCreationTimestamp="2026-03-20 00:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:26.572134865 +0000 UTC m=+141.005868919" watchObservedRunningTime="2026-03-20 00:11:26.573430588 +0000 UTC m=+141.007164642"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.601405 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:26 crc kubenswrapper[5106]: E0320 00:11:26.601892 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:27.101871384 +0000 UTC m=+141.535605438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.602037 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:26 crc kubenswrapper[5106]: E0320 00:11:26.604891 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:27.104875331 +0000 UTC m=+141.538609385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.612791 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-jxtsx" podStartSLOduration=116.612778106 podStartE2EDuration="1m56.612778106s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:26.611134853 +0000 UTC m=+141.044868907" watchObservedRunningTime="2026-03-20 00:11:26.612778106 +0000 UTC m=+141.046512160"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.665113 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-5lqg7" event={"ID":"7724e258-4050-4d7e-83c9-40b6dec81d33","Type":"ContainerStarted","Data":"23a372973bbbb6b20854b729464b7822199e9d6319ee34fe4abe8e5f9569563d"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.665147 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-5lqg7" event={"ID":"7724e258-4050-4d7e-83c9-40b6dec81d33","Type":"ContainerStarted","Data":"70a586445c02240f68384cc93d9fb55ad6888b5e75475229cc947764b1b49e82"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.695049 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-q5tjt" podStartSLOduration=7.695031174 podStartE2EDuration="7.695031174s" podCreationTimestamp="2026-03-20 00:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:26.693650298 +0000 UTC m=+141.127384362" watchObservedRunningTime="2026-03-20 00:11:26.695031174 +0000 UTC m=+141.128765228"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.695758 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-w2brd" podStartSLOduration=116.695751432 podStartE2EDuration="1m56.695751432s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:26.652182735 +0000 UTC m=+141.085916789" watchObservedRunningTime="2026-03-20 00:11:26.695751432 +0000 UTC m=+141.129485486"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.700187 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-2xds7" event={"ID":"9613d763-dd08-4d6e-8cf3-ef60b7ef3211","Type":"ContainerStarted","Data":"23ef30c62aac194eb31e186ea6bcd1557ab2ad412c074f2588ea19b3860e78a1"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.705953 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:26 crc kubenswrapper[5106]: E0320 00:11:26.707176 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:27.207161707 +0000 UTC m=+141.640895751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.726055 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" event={"ID":"f7dc02a2-0fb1-41e9-9d23-2565378a45a4","Type":"ContainerStarted","Data":"8121ec8d096ca9d532b1b6e7f0370a0da0421f4cc4893b2edd3a2b08c06d1db0"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.811083 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-cnh84" podStartSLOduration=116.811065275 podStartE2EDuration="1m56.811065275s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:26.810962832 +0000 UTC m=+141.244696886" watchObservedRunningTime="2026-03-20 00:11:26.811065275 +0000 UTC m=+141.244799329"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.811895 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:26 crc kubenswrapper[5106]: E0320 00:11:26.813314 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:27.313300513 +0000 UTC m=+141.747034557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.834951 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" event={"ID":"88acb60d-ae97-490e-bab2-b78f03e1b8c8","Type":"ContainerStarted","Data":"f674925808dc72362ff1e19567d856b6d84a014518e7dd53def80336780b952b"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.883687 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx" event={"ID":"c0575414-b1d1-44db-b352-6f101cce8c8f","Type":"ContainerStarted","Data":"e3fbd5b94c894b79e71c8c2c946bb21e8dab0d37b73fe44024d128674a59aab0"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.955894 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-5lqg7" podStartSLOduration=116.955872331 podStartE2EDuration="1m56.955872331s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:26.892006809 +0000 UTC m=+141.325740863" watchObservedRunningTime="2026-03-20 00:11:26.955872331 +0000 UTC m=+141.389606385"
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.962846 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dg59t" event={"ID":"2de01ede-f866-4638-9351-ab1ef6392aba","Type":"ContainerStarted","Data":"e8f774fe235c1df57a66e275314d3c4e80350d0d294aa77a18db08abd16b2cef"}
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.963792 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:26 crc kubenswrapper[5106]: E0320 00:11:26.963862 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:27.463843657 +0000 UTC m=+141.897577711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:26 crc kubenswrapper[5106]: I0320 00:11:26.977633 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:26 crc kubenswrapper[5106]: E0320 00:11:26.978059 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:27.478039804 +0000 UTC m=+141.911773858 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.002666 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" event={"ID":"e6e237e2-c84f-498f-888a-4fdaa7af3eb8","Type":"ContainerStarted","Data":"fc9d913ad09f1dcb07a22391d9e54257c62ef5f895b38dd0cb8b98eb630e237a"}
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.006943 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt"
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.013384 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sprmn" event={"ID":"27db15f2-d153-4ecb-beb5-b139549dcb36","Type":"ContainerStarted","Data":"345a120c6addd1ec767e1bb4eec692b56983944ea0570e3b2a1c11da32f0e764"}
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.014004 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sprmn" event={"ID":"27db15f2-d153-4ecb-beb5-b139549dcb36","Type":"ContainerStarted","Data":"0edf2538069b5bc903bc4ed1352c53bd356535ce233e42a749101a2c81bb0b35"}
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.034928 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-8l78l"
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.055216 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" podStartSLOduration=117.05519856 podStartE2EDuration="1m57.05519856s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:27.048920058 +0000 UTC m=+141.482654102" watchObservedRunningTime="2026-03-20 00:11:27.05519856 +0000 UTC m=+141.488932614"
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.055774 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-2xds7" podStartSLOduration=117.055769805 podStartE2EDuration="1m57.055769805s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:26.978143857 +0000 UTC m=+141.411877911" watchObservedRunningTime="2026-03-20 00:11:27.055769805 +0000 UTC m=+141.489503849"
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.061708 5106 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-lw4rt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body=
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.061771 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" podUID="e6e237e2-c84f-498f-888a-4fdaa7af3eb8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused"
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.081560 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:27 crc kubenswrapper[5106]: E0320 00:11:27.083099 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:27.583073651 +0000 UTC m=+142.016807715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.151732 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p"
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.152014 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p"
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.177773 5106 patch_prober.go:28] interesting pod/apiserver-8596bd845d-8zd6p container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.29:8443/livez\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.177853 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" podUID="20d42914-6ce8-4457-aa77-e01ef4fb9895" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.29:8443/livez\": dial tcp 10.217.0.29:8443: connect: connection refused"
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.184854 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:27 crc kubenswrapper[5106]: E0320 00:11:27.185207 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:27.685193963 +0000 UTC m=+142.118928017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.286017 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:27 crc kubenswrapper[5106]: E0320 00:11:27.286421 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:27.786404791 +0000 UTC m=+142.220138845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.391367 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:27 crc kubenswrapper[5106]: E0320 00:11:27.391725 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:27.891708834 +0000 UTC m=+142.325442888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.501028 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:27 crc kubenswrapper[5106]: E0320 00:11:27.501840 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:28.001822983 +0000 UTC m=+142.435557037 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.552161 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:27 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:27 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:27 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.552209 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.602530 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:27 crc kubenswrapper[5106]: E0320 00:11:27.602863 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-03-20 00:11:28.102847756 +0000 UTC m=+142.536581810 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.630318 5106 ???:1] "http: TLS handshake error from 192.168.126.11:48440: no serving certificate available for the kubelet" Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.703204 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:27 crc kubenswrapper[5106]: E0320 00:11:27.703656 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:28.203640273 +0000 UTC m=+142.637374327 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.805171 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:27 crc kubenswrapper[5106]: E0320 00:11:27.805918 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:28.305896508 +0000 UTC m=+142.739630562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.906745 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:27 crc kubenswrapper[5106]: E0320 00:11:27.906769 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:28.406750786 +0000 UTC m=+142.840484840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:27 crc kubenswrapper[5106]: I0320 00:11:27.907155 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:27 crc kubenswrapper[5106]: E0320 00:11:27.907654 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:28.407645359 +0000 UTC m=+142.841379413 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.011998 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:28 crc kubenswrapper[5106]: E0320 00:11:28.012221 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:28.512193774 +0000 UTC m=+142.945927828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.012423 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:28 crc kubenswrapper[5106]: E0320 00:11:28.012790 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:28.512775329 +0000 UTC m=+142.946509383 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.033549 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" event={"ID":"e63e177b-5ff9-4662-be8b-4b193c72fc72","Type":"ContainerStarted","Data":"ad75706747914db2dc11ecaf543d71c55783edadc576777303d4cbea7d2aa15a"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.033617 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" event={"ID":"e63e177b-5ff9-4662-be8b-4b193c72fc72","Type":"ContainerStarted","Data":"df6bd4b71580a6aef66fbc5c8ececebd9182c97010eb501e096bfb77dfcdef3f"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.038715 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-72hsj" event={"ID":"3da82db9-f242-4af1-83ef-d68599ce6c8d","Type":"ContainerStarted","Data":"7194464bd9dc2edb3bbfe65465500909bc184164cfb54978223abac25f9377c5"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.058219 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-smhtw" podStartSLOduration=118.058203384 podStartE2EDuration="1m58.058203384s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:28.05648086 +0000 UTC m=+142.490214924" 
watchObservedRunningTime="2026-03-20 00:11:28.058203384 +0000 UTC m=+142.491937438" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.064036 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88" event={"ID":"cb851d4f-eefc-463d-bf17-c8ac126bd7c1","Type":"ContainerStarted","Data":"41208ccc11f7ad72b09970cd686cdc8825a1a58aa54a6116aa2888231e62268d"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.113099 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:28 crc kubenswrapper[5106]: E0320 00:11:28.113322 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:28.613299079 +0000 UTC m=+143.047033133 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.113535 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:28 crc kubenswrapper[5106]: E0320 00:11:28.114022 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:28.614006207 +0000 UTC m=+143.047740261 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.114668 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h54ck" event={"ID":"15d5d3b4-91df-49a0-9032-ebd865eacb5a","Type":"ContainerStarted","Data":"edebae09602f0e26c2adf786afea7a1d7edfb8ccf019a24b2cec236d6a690ee5"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.130036 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-72hsj" podStartSLOduration=118.130017452 podStartE2EDuration="1m58.130017452s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:28.128490272 +0000 UTC m=+142.562224326" watchObservedRunningTime="2026-03-20 00:11:28.130017452 +0000 UTC m=+142.563751506" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.160114 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz" event={"ID":"61dc866d-abfc-4dea-a349-6635b614e189","Type":"ContainerStarted","Data":"d579082535260f939b4891c2f451da28a812d71d94490ea3d4e650e2a85d60a9"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.160182 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz" 
event={"ID":"61dc866d-abfc-4dea-a349-6635b614e189","Type":"ContainerStarted","Data":"d2ee83c741ea53c03c0230e356aa598703ba5b024275f7855b2726f9106fcbd8"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.161388 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.180954 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h54ck" podStartSLOduration=118.180922468 podStartE2EDuration="1m58.180922468s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:28.161717452 +0000 UTC m=+142.595451506" watchObservedRunningTime="2026-03-20 00:11:28.180922468 +0000 UTC m=+142.614656532" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.197913 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" event={"ID":"59096bb7-5757-4196-96a5-f14e967998e7","Type":"ContainerStarted","Data":"09995baf980859a7d2ed42f91b2cd769f870108d9650c84bfaf6aceedf897639"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.199277 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.200367 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-2xds7" event={"ID":"9613d763-dd08-4d6e-8cf3-ef60b7ef3211","Type":"ContainerStarted","Data":"50289cc84a1c9359f920e39a98030b6f342e5f274953ee08b15e479daa773a84"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.204095 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" event={"ID":"88acb60d-ae97-490e-bab2-b78f03e1b8c8","Type":"ContainerStarted","Data":"743c497484490a77cdd7198d1346241c8888df2544afcfa33f3b75bbfd4b5ba1"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.204768 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-hzg88" podStartSLOduration=118.204752225 podStartE2EDuration="1m58.204752225s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:28.182002376 +0000 UTC m=+142.615736430" watchObservedRunningTime="2026-03-20 00:11:28.204752225 +0000 UTC m=+142.638486279" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.205798 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz" podStartSLOduration=118.205788782 podStartE2EDuration="1m58.205788782s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:28.205307689 +0000 UTC m=+142.639041753" watchObservedRunningTime="2026-03-20 00:11:28.205788782 +0000 UTC m=+142.639522846" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.218262 5106 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-xfn66 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/healthz\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.218327 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" 
podUID="59096bb7-5757-4196-96a5-f14e967998e7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.14:8080/healthz\": dial tcp 10.217.0.14:8080: connect: connection refused" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.220368 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:28 crc kubenswrapper[5106]: E0320 00:11:28.221566 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:28.721547789 +0000 UTC m=+143.155281843 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.225198 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx" event={"ID":"c0575414-b1d1-44db-b352-6f101cce8c8f","Type":"ContainerStarted","Data":"0a4f3f83f29ad67cb90b98330697e049112de4d79f824a9bbf1bb9e5843137d7"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.242399 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dg59t" event={"ID":"2de01ede-f866-4638-9351-ab1ef6392aba","Type":"ContainerStarted","Data":"292e55bc3695aa10b6055a85e23fe4172a2a28ce7a7a644a709b457441393b8d"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.247918 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-t49vx" podStartSLOduration=118.247903381 podStartE2EDuration="1m58.247903381s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:28.24553694 +0000 UTC m=+142.679270994" watchObservedRunningTime="2026-03-20 00:11:28.247903381 +0000 UTC m=+142.681637435" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.250148 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" 
event={"ID":"e6e237e2-c84f-498f-888a-4fdaa7af3eb8","Type":"ContainerStarted","Data":"062330ace64de5e79aa036a365d9cc017dc0d58a2e5a9f11d0c504b6fb0d5613"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.254496 5106 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-lw4rt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" start-of-body= Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.254690 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" podUID="e6e237e2-c84f-498f-888a-4fdaa7af3eb8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": dial tcp 10.217.0.43:5443: connect: connection refused" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.268858 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-svc7c" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.275168 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sprmn" event={"ID":"27db15f2-d153-4ecb-beb5-b139549dcb36","Type":"ContainerStarted","Data":"1fab0b7976252bc89556f5cad7b9b393fde64ec0a8d548a55b932dc3d11658a0"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.312404 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" event={"ID":"af8b1c72-0d76-40cc-9135-92bdefd2a461","Type":"ContainerStarted","Data":"0e46b7df796828dfd1add8e76ff1d2fbd11512ef74debcf0099651fb97c7e30c"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.313565 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.316119 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" podStartSLOduration=118.316097705 podStartE2EDuration="1m58.316097705s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:28.313874138 +0000 UTC m=+142.747608192" watchObservedRunningTime="2026-03-20 00:11:28.316097705 +0000 UTC m=+142.749831759" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.325266 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:28 crc kubenswrapper[5106]: E0320 00:11:28.326335 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:28.826321629 +0000 UTC m=+143.260055683 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.333914 5106 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-zbpp6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.23:6443/healthz\": dial tcp 10.217.0.23:6443: connect: connection refused" start-of-body= Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.333967 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" podUID="af8b1c72-0d76-40cc-9135-92bdefd2a461" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.23:6443/healthz\": dial tcp 10.217.0.23:6443: connect: connection refused" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.337040 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" event={"ID":"b07e83a4-1ea1-490a-8c95-49627f697ee0","Type":"ContainerStarted","Data":"17eb3d9628e2830a763d4c2b0b00e6b6e3c50c3346fe8de26fc41230b4b6eb31"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.340565 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sprmn" podStartSLOduration=118.340548708 podStartE2EDuration="1m58.340548708s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-20 00:11:28.340537477 +0000 UTC m=+142.774271531" watchObservedRunningTime="2026-03-20 00:11:28.340548708 +0000 UTC m=+142.774282762" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.364539 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8" event={"ID":"cf878343-818d-4ca7-a3ce-507df55ae4c5","Type":"ContainerStarted","Data":"55f280c0c5e42ef412eb7348ece4461afbc704831cb4a9996483fa620d199c8e"} Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.365278 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" podUID="0d49bd21-508b-4161-9bef-e0bad55ee83b" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e" gracePeriod=30 Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.366712 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.366756 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.367655 5106 patch_prober.go:28] interesting pod/console-operator-67c89758df-qxnjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Mar 20 00:11:28 crc 
kubenswrapper[5106]: I0320 00:11:28.367703 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-qxnjl" podUID="aba3de43-9844-4e15-b900-5a48bac6f058" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.377989 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" podStartSLOduration=118.377974316 podStartE2EDuration="1m58.377974316s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:28.371680353 +0000 UTC m=+142.805414407" watchObservedRunningTime="2026-03-20 00:11:28.377974316 +0000 UTC m=+142.811708370" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.379220 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-qgpkp" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.392776 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-q54jx" podStartSLOduration=118.392761658 podStartE2EDuration="1m58.392761658s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:28.391822364 +0000 UTC m=+142.825556418" watchObservedRunningTime="2026-03-20 00:11:28.392761658 +0000 UTC m=+142.826495712" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.431110 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:28 crc kubenswrapper[5106]: E0320 00:11:28.432819 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:28.932794104 +0000 UTC m=+143.366528158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.477947 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-f42c8" podStartSLOduration=118.477933321 podStartE2EDuration="1m58.477933321s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:28.476857044 +0000 UTC m=+142.910591098" watchObservedRunningTime="2026-03-20 00:11:28.477933321 +0000 UTC m=+142.911667375" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.524497 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-v7lf8" podStartSLOduration=118.524481795 
podStartE2EDuration="1m58.524481795s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:28.52310803 +0000 UTC m=+142.956842084" watchObservedRunningTime="2026-03-20 00:11:28.524481795 +0000 UTC m=+142.958215849" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.536340 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:28 crc kubenswrapper[5106]: E0320 00:11:28.536691 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.036680181 +0000 UTC m=+143.470414235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.549908 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:28 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:28 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:28 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.549988 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.638033 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:28 crc kubenswrapper[5106]: E0320 00:11:28.638463 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-03-20 00:11:29.138445823 +0000 UTC m=+143.572179877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.741626 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:28 crc kubenswrapper[5106]: E0320 00:11:28.742013 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.242000742 +0000 UTC m=+143.675734796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.842815 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:28 crc kubenswrapper[5106]: E0320 00:11:28.843274 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.343254911 +0000 UTC m=+143.776988965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.917731 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qtqct"] Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.932929 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qtqct" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.944614 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:28 crc kubenswrapper[5106]: E0320 00:11:28.944990 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.444972232 +0000 UTC m=+143.878706286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.964623 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Mar 20 00:11:28 crc kubenswrapper[5106]: I0320 00:11:28.966061 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qtqct"] Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.048140 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:29 crc kubenswrapper[5106]: E0320 00:11:29.048310 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.548281115 +0000 UTC m=+143.982015169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.048566 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpgvz\" (UniqueName: \"kubernetes.io/projected/2902d42b-f752-4b77-9aef-994def9350ba-kube-api-access-mpgvz\") pod \"community-operators-qtqct\" (UID: \"2902d42b-f752-4b77-9aef-994def9350ba\") " pod="openshift-marketplace/community-operators-qtqct" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.048711 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.048737 5106 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2902d42b-f752-4b77-9aef-994def9350ba-utilities\") pod \"community-operators-qtqct\" (UID: \"2902d42b-f752-4b77-9aef-994def9350ba\") " pod="openshift-marketplace/community-operators-qtqct" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.048782 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2902d42b-f752-4b77-9aef-994def9350ba-catalog-content\") pod \"community-operators-qtqct\" (UID: \"2902d42b-f752-4b77-9aef-994def9350ba\") " pod="openshift-marketplace/community-operators-qtqct" Mar 20 00:11:29 crc kubenswrapper[5106]: E0320 00:11:29.049188 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.549181268 +0000 UTC m=+143.982915322 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.067045 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bpzzz"] Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.077110 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bpzzz" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.081460 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.096500 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bpzzz"] Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.150049 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.150194 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-utilities\") pod \"certified-operators-bpzzz\" (UID: \"87f9f10a-e8ec-450d-b0a6-ea285c273dc4\") " pod="openshift-marketplace/certified-operators-bpzzz" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.150241 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-796wq\" (UniqueName: \"kubernetes.io/projected/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-kube-api-access-796wq\") pod \"certified-operators-bpzzz\" (UID: \"87f9f10a-e8ec-450d-b0a6-ea285c273dc4\") " pod="openshift-marketplace/certified-operators-bpzzz" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.150264 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mpgvz\" (UniqueName: \"kubernetes.io/projected/2902d42b-f752-4b77-9aef-994def9350ba-kube-api-access-mpgvz\") pod \"community-operators-qtqct\" (UID: 
\"2902d42b-f752-4b77-9aef-994def9350ba\") " pod="openshift-marketplace/community-operators-qtqct" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.150312 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2902d42b-f752-4b77-9aef-994def9350ba-utilities\") pod \"community-operators-qtqct\" (UID: \"2902d42b-f752-4b77-9aef-994def9350ba\") " pod="openshift-marketplace/community-operators-qtqct" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.150338 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-catalog-content\") pod \"certified-operators-bpzzz\" (UID: \"87f9f10a-e8ec-450d-b0a6-ea285c273dc4\") " pod="openshift-marketplace/certified-operators-bpzzz" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.150356 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2902d42b-f752-4b77-9aef-994def9350ba-catalog-content\") pod \"community-operators-qtqct\" (UID: \"2902d42b-f752-4b77-9aef-994def9350ba\") " pod="openshift-marketplace/community-operators-qtqct" Mar 20 00:11:29 crc kubenswrapper[5106]: E0320 00:11:29.150645 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.650617932 +0000 UTC m=+144.084351986 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.150804 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2902d42b-f752-4b77-9aef-994def9350ba-catalog-content\") pod \"community-operators-qtqct\" (UID: \"2902d42b-f752-4b77-9aef-994def9350ba\") " pod="openshift-marketplace/community-operators-qtqct" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.150907 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2902d42b-f752-4b77-9aef-994def9350ba-utilities\") pod \"community-operators-qtqct\" (UID: \"2902d42b-f752-4b77-9aef-994def9350ba\") " pod="openshift-marketplace/community-operators-qtqct" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.201075 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpgvz\" (UniqueName: \"kubernetes.io/projected/2902d42b-f752-4b77-9aef-994def9350ba-kube-api-access-mpgvz\") pod \"community-operators-qtqct\" (UID: \"2902d42b-f752-4b77-9aef-994def9350ba\") " pod="openshift-marketplace/community-operators-qtqct" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.251974 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-utilities\") pod \"certified-operators-bpzzz\" (UID: \"87f9f10a-e8ec-450d-b0a6-ea285c273dc4\") " 
pod="openshift-marketplace/certified-operators-bpzzz" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.252057 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-796wq\" (UniqueName: \"kubernetes.io/projected/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-kube-api-access-796wq\") pod \"certified-operators-bpzzz\" (UID: \"87f9f10a-e8ec-450d-b0a6-ea285c273dc4\") " pod="openshift-marketplace/certified-operators-bpzzz" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.252108 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.252139 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-catalog-content\") pod \"certified-operators-bpzzz\" (UID: \"87f9f10a-e8ec-450d-b0a6-ea285c273dc4\") " pod="openshift-marketplace/certified-operators-bpzzz" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.252618 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-utilities\") pod \"certified-operators-bpzzz\" (UID: \"87f9f10a-e8ec-450d-b0a6-ea285c273dc4\") " pod="openshift-marketplace/certified-operators-bpzzz" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.252646 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-catalog-content\") pod \"certified-operators-bpzzz\" (UID: 
\"87f9f10a-e8ec-450d-b0a6-ea285c273dc4\") " pod="openshift-marketplace/certified-operators-bpzzz" Mar 20 00:11:29 crc kubenswrapper[5106]: E0320 00:11:29.252876 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.752863797 +0000 UTC m=+144.186597851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.276746 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qtqct" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.292147 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-796wq\" (UniqueName: \"kubernetes.io/projected/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-kube-api-access-796wq\") pod \"certified-operators-bpzzz\" (UID: \"87f9f10a-e8ec-450d-b0a6-ea285c273dc4\") " pod="openshift-marketplace/certified-operators-bpzzz" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.298676 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mgd7w"] Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.316612 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mgd7w" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.318051 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mgd7w"] Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.353122 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.353358 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a342c56e-aefd-443c-b37a-af158660104d-utilities\") pod \"community-operators-mgd7w\" (UID: \"a342c56e-aefd-443c-b37a-af158660104d\") " pod="openshift-marketplace/community-operators-mgd7w" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.353397 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmhrk\" (UniqueName: \"kubernetes.io/projected/a342c56e-aefd-443c-b37a-af158660104d-kube-api-access-xmhrk\") pod \"community-operators-mgd7w\" (UID: \"a342c56e-aefd-443c-b37a-af158660104d\") " pod="openshift-marketplace/community-operators-mgd7w" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.353475 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a342c56e-aefd-443c-b37a-af158660104d-catalog-content\") pod \"community-operators-mgd7w\" (UID: \"a342c56e-aefd-443c-b37a-af158660104d\") " pod="openshift-marketplace/community-operators-mgd7w" Mar 20 00:11:29 crc kubenswrapper[5106]: E0320 00:11:29.353614 5106 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.853592292 +0000 UTC m=+144.287326346 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.394464 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bpzzz" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.420829 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dg59t" event={"ID":"2de01ede-f866-4638-9351-ab1ef6392aba","Type":"ContainerStarted","Data":"d2b8201ac87d4a2b8b7d63b8a236c6ebd227c3b53e4976f08387f09cde01fb87"} Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.423617 5106 patch_prober.go:28] interesting pod/console-operator-67c89758df-qxnjl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.426286 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-qxnjl" podUID="aba3de43-9844-4e15-b900-5a48bac6f058" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Mar 
20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.423917 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.427364 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.429840 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-dg59t" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.430021 5106 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-zbpp6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.23:6443/healthz\": dial tcp 10.217.0.23:6443: connect: connection refused" start-of-body= Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.430135 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" podUID="af8b1c72-0d76-40cc-9135-92bdefd2a461" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.23:6443/healthz\": dial tcp 10.217.0.23:6443: connect: connection refused" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.430056 5106 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-xfn66 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/healthz\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 20 00:11:29 crc 
kubenswrapper[5106]: I0320 00:11:29.430503 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" podUID="59096bb7-5757-4196-96a5-f14e967998e7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.14:8080/healthz\": dial tcp 10.217.0.14:8080: connect: connection refused" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.476519 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-dg59t" podStartSLOduration=10.476500471 podStartE2EDuration="10.476500471s" podCreationTimestamp="2026-03-20 00:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:29.46483084 +0000 UTC m=+143.898564894" watchObservedRunningTime="2026-03-20 00:11:29.476500471 +0000 UTC m=+143.910234515" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.477639 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b62gq"] Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.479425 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.479637 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a342c56e-aefd-443c-b37a-af158660104d-catalog-content\") pod \"community-operators-mgd7w\" (UID: \"a342c56e-aefd-443c-b37a-af158660104d\") " pod="openshift-marketplace/community-operators-mgd7w" Mar 20 00:11:29 crc kubenswrapper[5106]: 
I0320 00:11:29.482135 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a342c56e-aefd-443c-b37a-af158660104d-catalog-content\") pod \"community-operators-mgd7w\" (UID: \"a342c56e-aefd-443c-b37a-af158660104d\") " pod="openshift-marketplace/community-operators-mgd7w" Mar 20 00:11:29 crc kubenswrapper[5106]: E0320 00:11:29.482487 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:29.982473936 +0000 UTC m=+144.416207980 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.482918 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a342c56e-aefd-443c-b37a-af158660104d-utilities\") pod \"community-operators-mgd7w\" (UID: \"a342c56e-aefd-443c-b37a-af158660104d\") " pod="openshift-marketplace/community-operators-mgd7w" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.483234 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a342c56e-aefd-443c-b37a-af158660104d-utilities\") pod \"community-operators-mgd7w\" (UID: \"a342c56e-aefd-443c-b37a-af158660104d\") " pod="openshift-marketplace/community-operators-mgd7w" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.483717 5106 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xmhrk\" (UniqueName: \"kubernetes.io/projected/a342c56e-aefd-443c-b37a-af158660104d-kube-api-access-xmhrk\") pod \"community-operators-mgd7w\" (UID: \"a342c56e-aefd-443c-b37a-af158660104d\") " pod="openshift-marketplace/community-operators-mgd7w" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.488346 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b62gq" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.495078 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b62gq"] Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.521042 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmhrk\" (UniqueName: \"kubernetes.io/projected/a342c56e-aefd-443c-b37a-af158660104d-kube-api-access-xmhrk\") pod \"community-operators-mgd7w\" (UID: \"a342c56e-aefd-443c-b37a-af158660104d\") " pod="openshift-marketplace/community-operators-mgd7w" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.536364 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:29 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:29 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:29 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.536445 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 
00:11:29.585543 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.585699 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md9td\" (UniqueName: \"kubernetes.io/projected/fe8416cb-a9a0-45bd-aec9-25549b0c4551-kube-api-access-md9td\") pod \"certified-operators-b62gq\" (UID: \"fe8416cb-a9a0-45bd-aec9-25549b0c4551\") " pod="openshift-marketplace/certified-operators-b62gq" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.585755 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe8416cb-a9a0-45bd-aec9-25549b0c4551-catalog-content\") pod \"certified-operators-b62gq\" (UID: \"fe8416cb-a9a0-45bd-aec9-25549b0c4551\") " pod="openshift-marketplace/certified-operators-b62gq" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.585807 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe8416cb-a9a0-45bd-aec9-25549b0c4551-utilities\") pod \"certified-operators-b62gq\" (UID: \"fe8416cb-a9a0-45bd-aec9-25549b0c4551\") " pod="openshift-marketplace/certified-operators-b62gq" Mar 20 00:11:29 crc kubenswrapper[5106]: E0320 00:11:29.585917 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:30.085899851 +0000 UTC m=+144.519633905 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.676895 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mgd7w" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.686763 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.686804 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-md9td\" (UniqueName: \"kubernetes.io/projected/fe8416cb-a9a0-45bd-aec9-25549b0c4551-kube-api-access-md9td\") pod \"certified-operators-b62gq\" (UID: \"fe8416cb-a9a0-45bd-aec9-25549b0c4551\") " pod="openshift-marketplace/certified-operators-b62gq" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.686840 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.686872 5106 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe8416cb-a9a0-45bd-aec9-25549b0c4551-catalog-content\") pod \"certified-operators-b62gq\" (UID: \"fe8416cb-a9a0-45bd-aec9-25549b0c4551\") " pod="openshift-marketplace/certified-operators-b62gq" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.686893 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.687206 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.687272 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe8416cb-a9a0-45bd-aec9-25549b0c4551-catalog-content\") pod \"certified-operators-b62gq\" (UID: \"fe8416cb-a9a0-45bd-aec9-25549b0c4551\") " pod="openshift-marketplace/certified-operators-b62gq" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.687289 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs\") pod \"network-metrics-daemon-5qf4l\" (UID: \"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\") " pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:11:29 crc 
kubenswrapper[5106]: I0320 00:11:29.687312 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe8416cb-a9a0-45bd-aec9-25549b0c4551-utilities\") pod \"certified-operators-b62gq\" (UID: \"fe8416cb-a9a0-45bd-aec9-25549b0c4551\") " pod="openshift-marketplace/certified-operators-b62gq" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.687414 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:11:29 crc kubenswrapper[5106]: E0320 00:11:29.687568 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:30.18752722 +0000 UTC m=+144.621261274 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.687766 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe8416cb-a9a0-45bd-aec9-25549b0c4551-utilities\") pod \"certified-operators-b62gq\" (UID: \"fe8416cb-a9a0-45bd-aec9-25549b0c4551\") " pod="openshift-marketplace/certified-operators-b62gq" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.688206 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.690390 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.694071 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod 
\"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.695040 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56-metrics-certs\") pod \"network-metrics-daemon-5qf4l\" (UID: \"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56\") " pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.695524 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.709861 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-md9td\" (UniqueName: \"kubernetes.io/projected/fe8416cb-a9a0-45bd-aec9-25549b0c4551-kube-api-access-md9td\") pod \"certified-operators-b62gq\" (UID: \"fe8416cb-a9a0-45bd-aec9-25549b0c4551\") " pod="openshift-marketplace/certified-operators-b62gq" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.783859 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.808344 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:29 crc kubenswrapper[5106]: E0320 00:11:29.808992 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:30.308970661 +0000 UTC m=+144.742704725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.838796 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b62gq" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.879717 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.888753 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5qf4l" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.897959 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Mar 20 00:11:29 crc kubenswrapper[5106]: I0320 00:11:29.915869 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:29 crc kubenswrapper[5106]: E0320 00:11:29.916245 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:30.416232636 +0000 UTC m=+144.849966680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.016524 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:30 crc kubenswrapper[5106]: E0320 00:11:30.016954 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:30.51692581 +0000 UTC m=+144.950659864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.121435 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:30 crc kubenswrapper[5106]: E0320 00:11:30.121994 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:30.621976168 +0000 UTC m=+145.055710222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.230936 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:30 crc kubenswrapper[5106]: E0320 00:11:30.231271 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:30.731255485 +0000 UTC m=+145.164989539 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.284120 5106 ???:1] "http: TLS handshake error from 192.168.126.11:35676: no serving certificate available for the kubelet" Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.332664 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:30 crc kubenswrapper[5106]: E0320 00:11:30.333094 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:30.833063148 +0000 UTC m=+145.266797202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.424703 5106 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-lw4rt container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.425074 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt" podUID="e6e237e2-c84f-498f-888a-4fdaa7af3eb8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.43:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.434150 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:30 crc kubenswrapper[5106]: E0320 00:11:30.434372 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-03-20 00:11:30.934356668 +0000 UTC m=+145.368090722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.503565 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" event={"ID":"f7dc02a2-0fb1-41e9-9d23-2565378a45a4","Type":"ContainerStarted","Data":"34662e80e323294f98be833907a49f1115a21a25cf3d195486e1f883d3f8959c"} Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.505338 5106 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-xfn66 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/healthz\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.506058 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" podUID="59096bb7-5757-4196-96a5-f14e967998e7" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.14:8080/healthz\": dial tcp 10.217.0.14:8080: connect: connection refused" Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.539522 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: 
\"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:30 crc kubenswrapper[5106]: E0320 00:11:30.539813 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:31.039800436 +0000 UTC m=+145.473534490 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.545364 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:30 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:30 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:30 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.545611 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.640147 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:30 crc kubenswrapper[5106]: E0320 00:11:30.645717 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:31.145690505 +0000 UTC m=+145.579424559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.757191 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:30 crc kubenswrapper[5106]: E0320 00:11:30.757514 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:31.257500887 +0000 UTC m=+145.691234941 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.858514 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:30 crc kubenswrapper[5106]: E0320 00:11:30.859435 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:31.359414674 +0000 UTC m=+145.793148728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.864378 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bpzzz"] Mar 20 00:11:30 crc kubenswrapper[5106]: I0320 00:11:30.961441 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:30 crc kubenswrapper[5106]: E0320 00:11:30.961821 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:31.461806392 +0000 UTC m=+145.895540436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.062335 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:31 crc kubenswrapper[5106]: E0320 00:11:31.062659 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:31.562642471 +0000 UTC m=+145.996376525 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.080885 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c7cgp"] Mar 20 00:11:31 crc kubenswrapper[5106]: W0320 00:11:31.150492 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda342c56e_aefd_443c_b37a_af158660104d.slice/crio-203e35c446c7602acc76bf478d5f444c12218e8409c46adfc347663d1c275dac WatchSource:0}: Error finding container 203e35c446c7602acc76bf478d5f444c12218e8409c46adfc347663d1c275dac: Status 404 returned error can't find the container with id 203e35c446c7602acc76bf478d5f444c12218e8409c46adfc347663d1c275dac Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.163400 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:31 crc kubenswrapper[5106]: E0320 00:11:31.163729 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:31.663716955 +0000 UTC m=+146.097451009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.176606 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mgd7w"] Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.177171 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c7cgp" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.180126 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.204518 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7cgp"] Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.267108 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:31 crc kubenswrapper[5106]: E0320 00:11:31.267222 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:31.767201192 +0000 UTC m=+146.200935246 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.267431 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.267500 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwm6t\" (UniqueName: \"kubernetes.io/projected/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-kube-api-access-gwm6t\") pod \"redhat-marketplace-c7cgp\" (UID: \"55a6a924-c50c-40e9-bce1-a4a8a636c5e4\") " pod="openshift-marketplace/redhat-marketplace-c7cgp" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.267547 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-utilities\") pod \"redhat-marketplace-c7cgp\" (UID: \"55a6a924-c50c-40e9-bce1-a4a8a636c5e4\") " pod="openshift-marketplace/redhat-marketplace-c7cgp" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.267663 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-catalog-content\") pod \"redhat-marketplace-c7cgp\" (UID: \"55a6a924-c50c-40e9-bce1-a4a8a636c5e4\") " pod="openshift-marketplace/redhat-marketplace-c7cgp" Mar 20 00:11:31 crc kubenswrapper[5106]: E0320 00:11:31.267981 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:31.767973872 +0000 UTC m=+146.201707916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.291891 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b62gq"] Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.298014 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qtqct"] Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.368787 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.368892 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-catalog-content\") pod \"redhat-marketplace-c7cgp\" (UID: \"55a6a924-c50c-40e9-bce1-a4a8a636c5e4\") " pod="openshift-marketplace/redhat-marketplace-c7cgp" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.368961 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwm6t\" (UniqueName: \"kubernetes.io/projected/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-kube-api-access-gwm6t\") pod \"redhat-marketplace-c7cgp\" (UID: \"55a6a924-c50c-40e9-bce1-a4a8a636c5e4\") " pod="openshift-marketplace/redhat-marketplace-c7cgp" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.368977 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-utilities\") pod \"redhat-marketplace-c7cgp\" (UID: \"55a6a924-c50c-40e9-bce1-a4a8a636c5e4\") " pod="openshift-marketplace/redhat-marketplace-c7cgp" Mar 20 00:11:31 crc kubenswrapper[5106]: E0320 00:11:31.369140 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:31.869124827 +0000 UTC m=+146.302858881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.369503 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-catalog-content\") pod \"redhat-marketplace-c7cgp\" (UID: \"55a6a924-c50c-40e9-bce1-a4a8a636c5e4\") " pod="openshift-marketplace/redhat-marketplace-c7cgp" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.369999 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-utilities\") pod \"redhat-marketplace-c7cgp\" (UID: \"55a6a924-c50c-40e9-bce1-a4a8a636c5e4\") " pod="openshift-marketplace/redhat-marketplace-c7cgp" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.427884 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwm6t\" (UniqueName: \"kubernetes.io/projected/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-kube-api-access-gwm6t\") pod \"redhat-marketplace-c7cgp\" (UID: \"55a6a924-c50c-40e9-bce1-a4a8a636c5e4\") " pod="openshift-marketplace/redhat-marketplace-c7cgp" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.433537 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5qf4l"] Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.472869 5106 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-wqms8 container/openshift-apiserver namespace/openshift-apiserver: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 20 00:11:31 crc kubenswrapper[5106]: [+]log ok Mar 20 00:11:31 crc kubenswrapper[5106]: [+]etcd ok Mar 20 00:11:31 crc kubenswrapper[5106]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 20 00:11:31 crc kubenswrapper[5106]: [+]poststarthook/generic-apiserver-start-informers ok Mar 20 00:11:31 crc kubenswrapper[5106]: [+]poststarthook/max-in-flight-filter ok Mar 20 00:11:31 crc kubenswrapper[5106]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 20 00:11:31 crc kubenswrapper[5106]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 20 00:11:31 crc kubenswrapper[5106]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 20 00:11:31 crc kubenswrapper[5106]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 20 00:11:31 crc kubenswrapper[5106]: [+]poststarthook/project.openshift.io-projectcache ok Mar 20 00:11:31 crc kubenswrapper[5106]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 20 00:11:31 crc kubenswrapper[5106]: [+]poststarthook/openshift.io-startinformers ok Mar 20 00:11:31 crc kubenswrapper[5106]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 20 00:11:31 crc kubenswrapper[5106]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 20 00:11:31 crc kubenswrapper[5106]: livez check failed Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.473183 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" podUID="17377ffd-aa79-4dee-bfea-6ae6b3026fd1" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.474554 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:31 crc kubenswrapper[5106]: E0320 00:11:31.475075 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:31.975045267 +0000 UTC m=+146.408779321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.484799 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vwx2n"] Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.508124 5106 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-zbpp6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.23:6443/healthz\": context deadline exceeded" start-of-body= Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.508197 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" podUID="af8b1c72-0d76-40cc-9135-92bdefd2a461" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.23:6443/healthz\": context deadline exceeded" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.517190 5106 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c7cgp" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.530519 5106 patch_prober.go:28] interesting pod/console-64d44f6ddf-vx9v6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.530610 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-vx9v6" podUID="74f7b3bf-429d-4b60-8b80-48300a789b1d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.561995 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:31 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:31 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:31 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.562069 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.570416 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwx2n"] Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.570453 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" 
event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"304cb538b4d1149069f00f884f1e4189479a074bbdd60b06c853c765ecd5cfc7"} Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.570482 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.570496 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.570504 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"ed6d058b7be0b3d6a34b8fa6ede38a4b50272f88708cb8cae54ea0ab2fd592f8"} Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.571204 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwx2n" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.576408 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:31 crc kubenswrapper[5106]: E0320 00:11:31.576724 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:32.076706747 +0000 UTC m=+146.510440791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.596307 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mgd7w" event={"ID":"a342c56e-aefd-443c-b37a-af158660104d","Type":"ContainerStarted","Data":"203e35c446c7602acc76bf478d5f444c12218e8409c46adfc347663d1c275dac"} Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.609187 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpzzz" event={"ID":"87f9f10a-e8ec-450d-b0a6-ea285c273dc4","Type":"ContainerStarted","Data":"60b95042fea4fae8470fa8078b9ea5a148251293d4def512099e5c097bbb3254"} Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.609234 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpzzz" event={"ID":"87f9f10a-e8ec-450d-b0a6-ea285c273dc4","Type":"ContainerStarted","Data":"d800a9cf0ee82125d37b382f8ec833454e5c8854f9fd2aa9778d6c772251fd40"} Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.621806 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" event={"ID":"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56","Type":"ContainerStarted","Data":"3df8700a2f667a8b018c2dcc53f8dcae42e4815f32ca345a94fbd34e592c2434"} Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.630434 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" 
event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"b18a64cb63964c2f05029d6cfc1ebb0f6b8bb22bac59ffec07ec9eed2be6d0b2"} Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.651699 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtqct" event={"ID":"2902d42b-f752-4b77-9aef-994def9350ba","Type":"ContainerStarted","Data":"7e78a7183785c8e8270176a866bea0c6cff0c6280b8632ee66c40fec7618e129"} Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.672435 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b62gq" event={"ID":"fe8416cb-a9a0-45bd-aec9-25549b0c4551","Type":"ContainerStarted","Data":"7907620905dd4d03808b26c0bb0cd14fcb2982ac052a90a6382c248b85850a66"} Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.680772 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldn2w\" (UniqueName: \"kubernetes.io/projected/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-kube-api-access-ldn2w\") pod \"redhat-marketplace-vwx2n\" (UID: \"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8\") " pod="openshift-marketplace/redhat-marketplace-vwx2n" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.680839 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-catalog-content\") pod \"redhat-marketplace-vwx2n\" (UID: \"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8\") " pod="openshift-marketplace/redhat-marketplace-vwx2n" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.680887 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: 
\"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.680935 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-utilities\") pod \"redhat-marketplace-vwx2n\" (UID: \"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8\") " pod="openshift-marketplace/redhat-marketplace-vwx2n" Mar 20 00:11:31 crc kubenswrapper[5106]: E0320 00:11:31.682804 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:32.182788911 +0000 UTC m=+146.616522965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.781907 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.782150 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ldn2w\" (UniqueName: \"kubernetes.io/projected/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-kube-api-access-ldn2w\") pod 
\"redhat-marketplace-vwx2n\" (UID: \"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8\") " pod="openshift-marketplace/redhat-marketplace-vwx2n" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.782192 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-catalog-content\") pod \"redhat-marketplace-vwx2n\" (UID: \"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8\") " pod="openshift-marketplace/redhat-marketplace-vwx2n" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.782253 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-utilities\") pod \"redhat-marketplace-vwx2n\" (UID: \"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8\") " pod="openshift-marketplace/redhat-marketplace-vwx2n" Mar 20 00:11:31 crc kubenswrapper[5106]: E0320 00:11:31.782731 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:32.282708096 +0000 UTC m=+146.716442150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.782967 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-utilities\") pod \"redhat-marketplace-vwx2n\" (UID: \"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8\") " pod="openshift-marketplace/redhat-marketplace-vwx2n" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.783060 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-catalog-content\") pod \"redhat-marketplace-vwx2n\" (UID: \"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8\") " pod="openshift-marketplace/redhat-marketplace-vwx2n" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.818878 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldn2w\" (UniqueName: \"kubernetes.io/projected/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-kube-api-access-ldn2w\") pod \"redhat-marketplace-vwx2n\" (UID: \"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8\") " pod="openshift-marketplace/redhat-marketplace-vwx2n" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.884255 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " 
pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:31 crc kubenswrapper[5106]: E0320 00:11:31.884613 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:32.384597151 +0000 UTC m=+146.818331205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.941650 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwx2n" Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.986831 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:31 crc kubenswrapper[5106]: E0320 00:11:31.987064 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:32.487034451 +0000 UTC m=+146.920768505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:31 crc kubenswrapper[5106]: I0320 00:11:31.987140 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:31 crc kubenswrapper[5106]: E0320 00:11:31.987550 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:32.487543334 +0000 UTC m=+146.921277388 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.073217 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nv56p"] Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.088288 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:32 crc kubenswrapper[5106]: E0320 00:11:32.088508 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:32.588492025 +0000 UTC m=+147.022226079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.135495 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nv56p"] Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.135661 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nv56p" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.147573 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.171238 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.187978 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-8zd6p" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.189395 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:32 crc kubenswrapper[5106]: E0320 00:11:32.189862 5106 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:32.689846547 +0000 UTC m=+147.123580601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.222056 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.254868 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.260784 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.260986 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.265989 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.293233 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.293344 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862d0f24-7d93-4dd5-a664-398213a26a24-utilities\") pod \"redhat-operators-nv56p\" (UID: \"862d0f24-7d93-4dd5-a664-398213a26a24\") " pod="openshift-marketplace/redhat-operators-nv56p" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.293434 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862d0f24-7d93-4dd5-a664-398213a26a24-catalog-content\") pod \"redhat-operators-nv56p\" (UID: \"862d0f24-7d93-4dd5-a664-398213a26a24\") " pod="openshift-marketplace/redhat-operators-nv56p" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.293550 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7hsm\" (UniqueName: \"kubernetes.io/projected/862d0f24-7d93-4dd5-a664-398213a26a24-kube-api-access-g7hsm\") pod \"redhat-operators-nv56p\" (UID: \"862d0f24-7d93-4dd5-a664-398213a26a24\") " pod="openshift-marketplace/redhat-operators-nv56p" Mar 20 00:11:32 crc kubenswrapper[5106]: E0320 00:11:32.293677 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:32.793661652 +0000 UTC m=+147.227395706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.371674 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7cgp"] Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.427763 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9aff4e0-f3cd-461e-8dd8-71c798569be2-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"d9aff4e0-f3cd-461e-8dd8-71c798569be2\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.427818 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g7hsm\" (UniqueName: \"kubernetes.io/projected/862d0f24-7d93-4dd5-a664-398213a26a24-kube-api-access-g7hsm\") pod \"redhat-operators-nv56p\" (UID: \"862d0f24-7d93-4dd5-a664-398213a26a24\") " pod="openshift-marketplace/redhat-operators-nv56p" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.427841 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.427862 5106 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862d0f24-7d93-4dd5-a664-398213a26a24-utilities\") pod \"redhat-operators-nv56p\" (UID: \"862d0f24-7d93-4dd5-a664-398213a26a24\") " pod="openshift-marketplace/redhat-operators-nv56p" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.427885 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9aff4e0-f3cd-461e-8dd8-71c798569be2-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"d9aff4e0-f3cd-461e-8dd8-71c798569be2\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.427918 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862d0f24-7d93-4dd5-a664-398213a26a24-catalog-content\") pod \"redhat-operators-nv56p\" (UID: \"862d0f24-7d93-4dd5-a664-398213a26a24\") " pod="openshift-marketplace/redhat-operators-nv56p" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.428276 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862d0f24-7d93-4dd5-a664-398213a26a24-catalog-content\") pod \"redhat-operators-nv56p\" (UID: \"862d0f24-7d93-4dd5-a664-398213a26a24\") " pod="openshift-marketplace/redhat-operators-nv56p" Mar 20 00:11:32 crc kubenswrapper[5106]: E0320 00:11:32.428767 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:32.928755787 +0000 UTC m=+147.362489841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.429088 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862d0f24-7d93-4dd5-a664-398213a26a24-utilities\") pod \"redhat-operators-nv56p\" (UID: \"862d0f24-7d93-4dd5-a664-398213a26a24\") " pod="openshift-marketplace/redhat-operators-nv56p" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.472420 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7hsm\" (UniqueName: \"kubernetes.io/projected/862d0f24-7d93-4dd5-a664-398213a26a24-kube-api-access-g7hsm\") pod \"redhat-operators-nv56p\" (UID: \"862d0f24-7d93-4dd5-a664-398213a26a24\") " pod="openshift-marketplace/redhat-operators-nv56p" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.483535 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fvw5w"] Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.495289 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvw5w" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.506328 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nv56p" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.512031 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvw5w"] Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.531119 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.531310 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-utilities\") pod \"redhat-operators-fvw5w\" (UID: \"3ff85b1d-ffcc-44c1-a340-5e15b96f36db\") " pod="openshift-marketplace/redhat-operators-fvw5w" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.531358 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9aff4e0-f3cd-461e-8dd8-71c798569be2-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"d9aff4e0-f3cd-461e-8dd8-71c798569be2\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.531397 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q2hc\" (UniqueName: \"kubernetes.io/projected/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-kube-api-access-5q2hc\") pod \"redhat-operators-fvw5w\" (UID: \"3ff85b1d-ffcc-44c1-a340-5e15b96f36db\") " pod="openshift-marketplace/redhat-operators-fvw5w" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.531418 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-catalog-content\") pod \"redhat-operators-fvw5w\" (UID: \"3ff85b1d-ffcc-44c1-a340-5e15b96f36db\") " pod="openshift-marketplace/redhat-operators-fvw5w" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.531445 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9aff4e0-f3cd-461e-8dd8-71c798569be2-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"d9aff4e0-f3cd-461e-8dd8-71c798569be2\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Mar 20 00:11:32 crc kubenswrapper[5106]: E0320 00:11:32.531513 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:33.031478954 +0000 UTC m=+147.465213008 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.532779 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.537672 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9aff4e0-f3cd-461e-8dd8-71c798569be2-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"d9aff4e0-f3cd-461e-8dd8-71c798569be2\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.552526 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:32 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:32 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:32 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.552589 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.570172 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9aff4e0-f3cd-461e-8dd8-71c798569be2-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"d9aff4e0-f3cd-461e-8dd8-71c798569be2\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.632209 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5q2hc\" (UniqueName: \"kubernetes.io/projected/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-kube-api-access-5q2hc\") pod \"redhat-operators-fvw5w\" (UID: \"3ff85b1d-ffcc-44c1-a340-5e15b96f36db\") " pod="openshift-marketplace/redhat-operators-fvw5w" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.632245 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-catalog-content\") pod \"redhat-operators-fvw5w\" (UID: \"3ff85b1d-ffcc-44c1-a340-5e15b96f36db\") " pod="openshift-marketplace/redhat-operators-fvw5w" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.632265 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.632368 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-utilities\") pod \"redhat-operators-fvw5w\" (UID: \"3ff85b1d-ffcc-44c1-a340-5e15b96f36db\") " pod="openshift-marketplace/redhat-operators-fvw5w" Mar 20 00:11:32 crc kubenswrapper[5106]: E0320 00:11:32.633567 5106 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:33.133554865 +0000 UTC m=+147.567288919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.633723 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-catalog-content\") pod \"redhat-operators-fvw5w\" (UID: \"3ff85b1d-ffcc-44c1-a340-5e15b96f36db\") " pod="openshift-marketplace/redhat-operators-fvw5w" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.633984 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-utilities\") pod \"redhat-operators-fvw5w\" (UID: \"3ff85b1d-ffcc-44c1-a340-5e15b96f36db\") " pod="openshift-marketplace/redhat-operators-fvw5w" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.653903 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.654038 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ss8gd" 
podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.657915 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q2hc\" (UniqueName: \"kubernetes.io/projected/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-kube-api-access-5q2hc\") pod \"redhat-operators-fvw5w\" (UID: \"3ff85b1d-ffcc-44c1-a340-5e15b96f36db\") " pod="openshift-marketplace/redhat-operators-fvw5w" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.717741 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"70f401211cb3ced2c1801a4080b238d05fa6cafda488cf64579952918c3bed14"} Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.718555 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.733170 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:32 crc kubenswrapper[5106]: E0320 00:11:32.733865 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:33.233845249 +0000 UTC m=+147.667579303 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.733928 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:32 crc kubenswrapper[5106]: E0320 00:11:32.734250 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:33.234242069 +0000 UTC m=+147.667976123 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.734721 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"4c31fb107a587e3b59a6d77e9f8ea909b4766f9f0ce91861a93139cbe1ad89a3"} Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.734919 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.771006 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7cgp" event={"ID":"55a6a924-c50c-40e9-bce1-a4a8a636c5e4","Type":"ContainerStarted","Data":"8cf65a141ce68f290067ef5012cc7c3c95ac1a7df52d5b8baa642c25c4f1d171"} Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.781922 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwx2n"] Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.786252 5106 generic.go:358] "Generic (PLEG): container finished" podID="a342c56e-aefd-443c-b37a-af158660104d" containerID="9fffd286dd22c6ace19dba6b50dd5103d5964efc67c7c59e4166d24edc757b06" exitCode=0 Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.786464 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mgd7w" 
event={"ID":"a342c56e-aefd-443c-b37a-af158660104d","Type":"ContainerDied","Data":"9fffd286dd22c6ace19dba6b50dd5103d5964efc67c7c59e4166d24edc757b06"} Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.789598 5106 generic.go:358] "Generic (PLEG): container finished" podID="87f9f10a-e8ec-450d-b0a6-ea285c273dc4" containerID="60b95042fea4fae8470fa8078b9ea5a148251293d4def512099e5c097bbb3254" exitCode=0 Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.789727 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpzzz" event={"ID":"87f9f10a-e8ec-450d-b0a6-ea285c273dc4","Type":"ContainerDied","Data":"60b95042fea4fae8470fa8078b9ea5a148251293d4def512099e5c097bbb3254"} Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.794621 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"f00fe9003e579e5b90a2b64d053b25330ced9e1bf42e815e27ccf1b383b4ecfe"} Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.826172 5106 generic.go:358] "Generic (PLEG): container finished" podID="2902d42b-f752-4b77-9aef-994def9350ba" containerID="5927fbee8ea9e2237728b85dfdb1ff1f5f0d444d76607a5eb8807ac99dac73ea" exitCode=0 Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.826308 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtqct" event={"ID":"2902d42b-f752-4b77-9aef-994def9350ba","Type":"ContainerDied","Data":"5927fbee8ea9e2237728b85dfdb1ff1f5f0d444d76607a5eb8807ac99dac73ea"} Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.834626 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:32 crc kubenswrapper[5106]: E0320 00:11:32.835013 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:33.334995565 +0000 UTC m=+147.768729619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.864537 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.865808 5106 generic.go:358] "Generic (PLEG): container finished" podID="fe8416cb-a9a0-45bd-aec9-25549b0c4551" containerID="aa1c7a3fe6484625fd77947d76c0c5f3d78c000976689d8b0d9a97362c87e2b8" exitCode=0 Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.867527 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b62gq" event={"ID":"fe8416cb-a9a0-45bd-aec9-25549b0c4551","Type":"ContainerDied","Data":"aa1c7a3fe6484625fd77947d76c0c5f3d78c000976689d8b0d9a97362c87e2b8"} Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.908938 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fvw5w" Mar 20 00:11:32 crc kubenswrapper[5106]: I0320 00:11:32.936198 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:32 crc kubenswrapper[5106]: E0320 00:11:32.939228 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:33.439212491 +0000 UTC m=+147.872946545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:33 crc kubenswrapper[5106]: I0320 00:11:33.039687 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:33 crc kubenswrapper[5106]: E0320 00:11:33.042000 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:33.541978249 +0000 UTC m=+147.975712303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:33 crc kubenswrapper[5106]: I0320 00:11:33.143813 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:33 crc kubenswrapper[5106]: E0320 00:11:33.144129 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:33.644116711 +0000 UTC m=+148.077850765 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:33 crc kubenswrapper[5106]: I0320 00:11:33.235316 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nv56p"] Mar 20 00:11:33 crc kubenswrapper[5106]: I0320 00:11:33.247891 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:33 crc kubenswrapper[5106]: E0320 00:11:33.248060 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:33.748028669 +0000 UTC m=+148.181762713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:33 crc kubenswrapper[5106]: I0320 00:11:33.248955 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:33 crc kubenswrapper[5106]: E0320 00:11:33.249498 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:33.749482377 +0000 UTC m=+148.183216431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:33 crc kubenswrapper[5106]: I0320 00:11:33.934992 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:33 crc kubenswrapper[5106]: E0320 00:11:33.935368 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:34.935348408 +0000 UTC m=+149.369082472 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:33 crc kubenswrapper[5106]: I0320 00:11:33.946809 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:33 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:33 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:33 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:33 crc kubenswrapper[5106]: I0320 00:11:33.946871 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.035944 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:34 crc kubenswrapper[5106]: E0320 00:11:34.036506 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-03-20 00:11:34.536484144 +0000 UTC m=+148.970218198 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.036744 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv56p" event={"ID":"862d0f24-7d93-4dd5-a664-398213a26a24","Type":"ContainerStarted","Data":"b1157e0dcc789467bde3e43a0dadbc7ca284a05f501158cfe68d2c02904ac431"} Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.091761 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fvw5w"] Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.131163 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwx2n" event={"ID":"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8","Type":"ContainerStarted","Data":"2660405be196f8690464e6d750a8f18d00c83bd1c22e19a8e2675c555ab7b406"} Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.131278 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwx2n" event={"ID":"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8","Type":"ContainerStarted","Data":"b795fd9d886b1008651dba585f8bdff443717cd83eba0e4ea93624cd09452adc"} Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.137217 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") 
pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:34 crc kubenswrapper[5106]: E0320 00:11:34.137877 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:34.637862327 +0000 UTC m=+149.071596381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.177812 5106 generic.go:358] "Generic (PLEG): container finished" podID="55a6a924-c50c-40e9-bce1-a4a8a636c5e4" containerID="d59737e0972e778adeb492427f2977907638bd92891b8cb10cea9d1fa8f483f5" exitCode=0 Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.177949 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7cgp" event={"ID":"55a6a924-c50c-40e9-bce1-a4a8a636c5e4","Type":"ContainerDied","Data":"d59737e0972e778adeb492427f2977907638bd92891b8cb10cea9d1fa8f483f5"} Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.201894 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" event={"ID":"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56","Type":"ContainerStarted","Data":"8e68fe8c06962dd355486309229c6072d548c60202a324e55edf8d3c31273438"} Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.239168 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:34 crc kubenswrapper[5106]: E0320 00:11:34.240654 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:34.740633025 +0000 UTC m=+149.174367099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.341239 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:34 crc kubenswrapper[5106]: E0320 00:11:34.341625 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:34.841570836 +0000 UTC m=+149.275304890 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.361977 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Mar 20 00:11:34 crc kubenswrapper[5106]: W0320 00:11:34.426563 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd9aff4e0_f3cd_461e_8dd8_71c798569be2.slice/crio-4ec17e6fc0617e45f90e9b5dba139655c48c5f0c087330cb05b1ee486f285171 WatchSource:0}: Error finding container 4ec17e6fc0617e45f90e9b5dba139655c48c5f0c087330cb05b1ee486f285171: Status 404 returned error can't find the container with id 4ec17e6fc0617e45f90e9b5dba139655c48c5f0c087330cb05b1ee486f285171 Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.441961 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:34 crc kubenswrapper[5106]: E0320 00:11:34.442097 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:34.942073706 +0000 UTC m=+149.375807760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.442442 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:34 crc kubenswrapper[5106]: E0320 00:11:34.442824 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:34.942808125 +0000 UTC m=+149.376542179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.540722 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:34 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:34 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:34 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.541107 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.546099 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:34 crc kubenswrapper[5106]: E0320 00:11:34.549707 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-03-20 00:11:35.049678199 +0000 UTC m=+149.483412253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.652476 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:34 crc kubenswrapper[5106]: E0320 00:11:34.652960 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:35.152945011 +0000 UTC m=+149.586679065 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.693529 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.697021 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.700837 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.703489 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.704848 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.753781 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.754743 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ee7b9f5d-baa1-48d9-a71d-2565029e33b2-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"ee7b9f5d-baa1-48d9-a71d-2565029e33b2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.754892 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee7b9f5d-baa1-48d9-a71d-2565029e33b2-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"ee7b9f5d-baa1-48d9-a71d-2565029e33b2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Mar 20 00:11:34 crc kubenswrapper[5106]: E0320 00:11:34.755187 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:35.255142604 +0000 UTC m=+149.688876658 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.856929 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.856977 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee7b9f5d-baa1-48d9-a71d-2565029e33b2-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"ee7b9f5d-baa1-48d9-a71d-2565029e33b2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.857004 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee7b9f5d-baa1-48d9-a71d-2565029e33b2-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"ee7b9f5d-baa1-48d9-a71d-2565029e33b2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.857065 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee7b9f5d-baa1-48d9-a71d-2565029e33b2-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: 
\"ee7b9f5d-baa1-48d9-a71d-2565029e33b2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Mar 20 00:11:34 crc kubenswrapper[5106]: E0320 00:11:34.857283 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:35.357272506 +0000 UTC m=+149.791006560 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.890450 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee7b9f5d-baa1-48d9-a71d-2565029e33b2-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"ee7b9f5d-baa1-48d9-a71d-2565029e33b2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.960192 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:34 crc kubenswrapper[5106]: E0320 00:11:34.960389 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-03-20 00:11:35.460358031 +0000 UTC m=+149.894092085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:34 crc kubenswrapper[5106]: I0320 00:11:34.960495 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:34 crc kubenswrapper[5106]: E0320 00:11:34.960840 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:35.460827094 +0000 UTC m=+149.894561148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.038027 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.061679 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.062066 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:35.562045542 +0000 UTC m=+149.995779596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.163701 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.163991 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:35.663978258 +0000 UTC m=+150.097712312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.222053 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5qf4l" event={"ID":"64ed3ec4-d8f8-4a23-b44c-e4ff8aec3a56","Type":"ContainerStarted","Data":"d71fd1e2285bfec097bae12a08c9e0182f218e67b86ec8886bb1021b348c346f"} Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.223385 5106 generic.go:358] "Generic (PLEG): container finished" podID="88da1299-0802-4745-8701-7de465542299" containerID="8d156bc0fb97314d5e1ebf2b69675d2012547fd513633de185f57f44179855e2" exitCode=0 Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.223476 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz" event={"ID":"88da1299-0802-4745-8701-7de465542299","Type":"ContainerDied","Data":"8d156bc0fb97314d5e1ebf2b69675d2012547fd513633de185f57f44179855e2"} Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.225126 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"d9aff4e0-f3cd-461e-8dd8-71c798569be2","Type":"ContainerStarted","Data":"b4908e2276f60da7018fede1b1660cd4d5caadab2ac6d1e6648d9613e978d5aa"} Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.225163 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"d9aff4e0-f3cd-461e-8dd8-71c798569be2","Type":"ContainerStarted","Data":"4ec17e6fc0617e45f90e9b5dba139655c48c5f0c087330cb05b1ee486f285171"} Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.226712 5106 generic.go:358] "Generic (PLEG): container finished" podID="862d0f24-7d93-4dd5-a664-398213a26a24" containerID="b704909afe5cb44892692a2875bebe422d5f85b1adc3ff025a106f0f85e325b0" exitCode=0 Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.226779 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv56p" event={"ID":"862d0f24-7d93-4dd5-a664-398213a26a24","Type":"ContainerDied","Data":"b704909afe5cb44892692a2875bebe422d5f85b1adc3ff025a106f0f85e325b0"} Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.230007 5106 generic.go:358] "Generic (PLEG): container finished" podID="3ff85b1d-ffcc-44c1-a340-5e15b96f36db" containerID="fd25608c3a0b7dfe658a9be9184410c5f0b162f924e3dac2e025781182535ef4" exitCode=0 Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.230110 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvw5w" event={"ID":"3ff85b1d-ffcc-44c1-a340-5e15b96f36db","Type":"ContainerDied","Data":"fd25608c3a0b7dfe658a9be9184410c5f0b162f924e3dac2e025781182535ef4"} Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.230889 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvw5w" event={"ID":"3ff85b1d-ffcc-44c1-a340-5e15b96f36db","Type":"ContainerStarted","Data":"a42e7cc64633ccace209510f988e9db9bbb6174d5d192bfc5b9f5764820170f5"} Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.243064 5106 generic.go:358] "Generic (PLEG): container finished" podID="29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" containerID="2660405be196f8690464e6d750a8f18d00c83bd1c22e19a8e2675c555ab7b406" exitCode=0 Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.244158 5106 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwx2n" event={"ID":"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8","Type":"ContainerDied","Data":"2660405be196f8690464e6d750a8f18d00c83bd1c22e19a8e2675c555ab7b406"} Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.248456 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5qf4l" podStartSLOduration=125.248434153 podStartE2EDuration="2m5.248434153s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:35.240508728 +0000 UTC m=+149.674242802" watchObservedRunningTime="2026-03-20 00:11:35.248434153 +0000 UTC m=+149.682168207" Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.257825 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.260300 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=3.260269259 podStartE2EDuration="3.260269259s" podCreationTimestamp="2026-03-20 00:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:35.258219846 +0000 UTC m=+149.691953900" watchObservedRunningTime="2026-03-20 00:11:35.260269259 +0000 UTC m=+149.694003313" Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.269001 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.269374 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.271233 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:35.771217112 +0000 UTC m=+150.204951166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.274494 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.274543 5106 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" podUID="0d49bd21-508b-4161-9bef-e0bad55ee83b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.371832 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.372209 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:35.872193484 +0000 UTC m=+150.305927538 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.430206 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.435466 5106 ???:1] "http: TLS handshake error from 192.168.126.11:35684: no serving certificate available for the kubelet" Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.473001 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.473181 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:35.973148336 +0000 UTC m=+150.406882390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.473264 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.473647 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:35.973637759 +0000 UTC m=+150.407371813 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.536299 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:35 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:35 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:35 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.536362 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.574495 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.574792 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-03-20 00:11:36.074774075 +0000 UTC m=+150.508508129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.676744 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.677180 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:36.177161823 +0000 UTC m=+150.610895877 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.778045 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.778233 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:36.278202187 +0000 UTC m=+150.711936241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.778839 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.779146 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:36.279137801 +0000 UTC m=+150.712871855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.883186 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.883307 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:36.383287675 +0000 UTC m=+150.817021729 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.883476 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.883867 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:36.38385704 +0000 UTC m=+150.817591094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.984790 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.984881 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:36.484861133 +0000 UTC m=+150.918595187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:35 crc kubenswrapper[5106]: I0320 00:11:35.985121 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:35 crc kubenswrapper[5106]: E0320 00:11:35.985425 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:36.485415467 +0000 UTC m=+150.919149521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.085843 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:36 crc kubenswrapper[5106]: E0320 00:11:36.086184 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:36.586166853 +0000 UTC m=+151.019900907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.195409 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:36 crc kubenswrapper[5106]: E0320 00:11:36.196289 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:36.696253091 +0000 UTC m=+151.129987145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.271892 5106 generic.go:358] "Generic (PLEG): container finished" podID="d9aff4e0-f3cd-461e-8dd8-71c798569be2" containerID="b4908e2276f60da7018fede1b1660cd4d5caadab2ac6d1e6648d9613e978d5aa" exitCode=0 Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.271959 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"d9aff4e0-f3cd-461e-8dd8-71c798569be2","Type":"ContainerDied","Data":"b4908e2276f60da7018fede1b1660cd4d5caadab2ac6d1e6648d9613e978d5aa"} Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.278951 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"ee7b9f5d-baa1-48d9-a71d-2565029e33b2","Type":"ContainerStarted","Data":"93ce92504a2b02641df7ea72cfde01b3454fa7cad60852ceb6b7574822a77f96"} Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.279041 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"ee7b9f5d-baa1-48d9-a71d-2565029e33b2","Type":"ContainerStarted","Data":"8d147587188fa2fb04574b3c42cee51f861172e9cd2867f104551fd1a72d1903"} Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.296726 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:36 crc kubenswrapper[5106]: E0320 00:11:36.296996 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:36.796970316 +0000 UTC m=+151.230704370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.316849 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=2.3168299 podStartE2EDuration="2.3168299s" podCreationTimestamp="2026-03-20 00:11:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:36.314267073 +0000 UTC m=+150.748001127" watchObservedRunningTime="2026-03-20 00:11:36.3168299 +0000 UTC m=+150.750563954" Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.401859 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:36 crc 
kubenswrapper[5106]: E0320 00:11:36.402904 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:36.902886816 +0000 UTC m=+151.336620860 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.470746 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.476660 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-wqms8" Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.504751 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Mar 20 00:11:36 crc kubenswrapper[5106]: E0320 00:11:36.505827 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:37.005809658 +0000 UTC m=+151.439543712 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.578872 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 00:11:36 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld
Mar 20 00:11:36 crc kubenswrapper[5106]: [+]process-running ok
Mar 20 00:11:36 crc kubenswrapper[5106]: healthz check failed
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.578944 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.612606 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:36 crc kubenswrapper[5106]: E0320 00:11:36.612935 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:37.112922619 +0000 UTC m=+151.546656673 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.713542 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:36 crc kubenswrapper[5106]: E0320 00:11:36.714315 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:37.214288261 +0000 UTC m=+151.648022315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.762095 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz"
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.815391 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88da1299-0802-4745-8701-7de465542299-config-volume\") pod \"88da1299-0802-4745-8701-7de465542299\" (UID: \"88da1299-0802-4745-8701-7de465542299\") "
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.815481 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88da1299-0802-4745-8701-7de465542299-secret-volume\") pod \"88da1299-0802-4745-8701-7de465542299\" (UID: \"88da1299-0802-4745-8701-7de465542299\") "
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.815503 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lzx8\" (UniqueName: \"kubernetes.io/projected/88da1299-0802-4745-8701-7de465542299-kube-api-access-6lzx8\") pod \"88da1299-0802-4745-8701-7de465542299\" (UID: \"88da1299-0802-4745-8701-7de465542299\") "
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.815783 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:36 crc kubenswrapper[5106]: E0320 00:11:36.816200 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:37.316186017 +0000 UTC m=+151.749920071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.816440 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88da1299-0802-4745-8701-7de465542299-config-volume" (OuterVolumeSpecName: "config-volume") pod "88da1299-0802-4745-8701-7de465542299" (UID: "88da1299-0802-4745-8701-7de465542299"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.864702 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88da1299-0802-4745-8701-7de465542299-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "88da1299-0802-4745-8701-7de465542299" (UID: "88da1299-0802-4745-8701-7de465542299"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.865487 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88da1299-0802-4745-8701-7de465542299-kube-api-access-6lzx8" (OuterVolumeSpecName: "kube-api-access-6lzx8") pod "88da1299-0802-4745-8701-7de465542299" (UID: "88da1299-0802-4745-8701-7de465542299"). InnerVolumeSpecName "kube-api-access-6lzx8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.922371 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.922893 5106 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/88da1299-0802-4745-8701-7de465542299-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.922922 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6lzx8\" (UniqueName: \"kubernetes.io/projected/88da1299-0802-4745-8701-7de465542299-kube-api-access-6lzx8\") on node \"crc\" DevicePath \"\""
Mar 20 00:11:36 crc kubenswrapper[5106]: I0320 00:11:36.922936 5106 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88da1299-0802-4745-8701-7de465542299-config-volume\") on node \"crc\" DevicePath \"\""
Mar 20 00:11:36 crc kubenswrapper[5106]: E0320 00:11:36.924058 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:37.424028526 +0000 UTC m=+151.857762590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.026450 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:37 crc kubenswrapper[5106]: E0320 00:11:37.026812 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:37.526797984 +0000 UTC m=+151.960532038 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.037009 5106 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.127361 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:37 crc kubenswrapper[5106]: E0320 00:11:37.127709 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:37.627691824 +0000 UTC m=+152.061425878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.237549 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:37 crc kubenswrapper[5106]: E0320 00:11:37.237914 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:37.737902545 +0000 UTC m=+152.171636599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.296681 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" event={"ID":"f7dc02a2-0fb1-41e9-9d23-2565378a45a4","Type":"ContainerStarted","Data":"7eb610e9f56a15c0d7ab47af568177565fe446cb4a7609fa00489abba5720659"}
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.296747 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" event={"ID":"f7dc02a2-0fb1-41e9-9d23-2565378a45a4","Type":"ContainerStarted","Data":"2ceea2863176faeb4898bc8858c0113e10b8cc7de99532631cb1897382a0efed"}
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.299024 5106 generic.go:358] "Generic (PLEG): container finished" podID="ee7b9f5d-baa1-48d9-a71d-2565029e33b2" containerID="93ce92504a2b02641df7ea72cfde01b3454fa7cad60852ceb6b7574822a77f96" exitCode=0
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.299125 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"ee7b9f5d-baa1-48d9-a71d-2565029e33b2","Type":"ContainerDied","Data":"93ce92504a2b02641df7ea72cfde01b3454fa7cad60852ceb6b7574822a77f96"}
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.311520 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz"
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.311607 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566080-tg7xz" event={"ID":"88da1299-0802-4745-8701-7de465542299","Type":"ContainerDied","Data":"8a85498319c3c126fd820310d3bbc7aca96e6e06d5709aa7b5190c0b851d6eea"}
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.311640 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a85498319c3c126fd820310d3bbc7aca96e6e06d5709aa7b5190c0b851d6eea"
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.338391 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:37 crc kubenswrapper[5106]: E0320 00:11:37.339172 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:37.839150844 +0000 UTC m=+152.272884898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.440109 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:37 crc kubenswrapper[5106]: E0320 00:11:37.442023 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:37.942007845 +0000 UTC m=+152.375741899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.537215 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 00:11:37 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld
Mar 20 00:11:37 crc kubenswrapper[5106]: [+]process-running ok
Mar 20 00:11:37 crc kubenswrapper[5106]: healthz check failed
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.537282 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.544114 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:37 crc kubenswrapper[5106]: E0320 00:11:37.544328 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:38.044296611 +0000 UTC m=+152.478030665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.544721 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:37 crc kubenswrapper[5106]: E0320 00:11:37.545177 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:38.045167343 +0000 UTC m=+152.478901397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.654436 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:37 crc kubenswrapper[5106]: E0320 00:11:37.654669 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:38.154649455 +0000 UTC m=+152.588383509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.654731 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:37 crc kubenswrapper[5106]: E0320 00:11:37.658989 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:38.158971847 +0000 UTC m=+152.592705901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.759895 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:37 crc kubenswrapper[5106]: E0320 00:11:37.760509 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-03-20 00:11:38.260491903 +0000 UTC m=+152.694225957 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.864119 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:37 crc kubenswrapper[5106]: E0320 00:11:37.864482 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-03-20 00:11:38.364462372 +0000 UTC m=+152.798196496 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-jhgps" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.871127 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.944938 5106 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-20T00:11:37.037031729Z","UUID":"e2b7ed8f-3a02-4c76-b49f-3197049ef947","Handler":null,"Name":"","Endpoint":""}
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.947617 5106 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.947649 5106 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.964638 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9aff4e0-f3cd-461e-8dd8-71c798569be2-kubelet-dir\") pod \"d9aff4e0-f3cd-461e-8dd8-71c798569be2\" (UID: \"d9aff4e0-f3cd-461e-8dd8-71c798569be2\") "
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.964703 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9aff4e0-f3cd-461e-8dd8-71c798569be2-kube-api-access\") pod \"d9aff4e0-f3cd-461e-8dd8-71c798569be2\" (UID: \"d9aff4e0-f3cd-461e-8dd8-71c798569be2\") "
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.964894 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.965347 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9aff4e0-f3cd-461e-8dd8-71c798569be2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d9aff4e0-f3cd-461e-8dd8-71c798569be2" (UID: "d9aff4e0-f3cd-461e-8dd8-71c798569be2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.970875 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Mar 20 00:11:37 crc kubenswrapper[5106]: I0320 00:11:37.988197 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9aff4e0-f3cd-461e-8dd8-71c798569be2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d9aff4e0-f3cd-461e-8dd8-71c798569be2" (UID: "d9aff4e0-f3cd-461e-8dd8-71c798569be2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:11:38 crc kubenswrapper[5106]: I0320 00:11:38.066542 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:38 crc kubenswrapper[5106]: I0320 00:11:38.066716 5106 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9aff4e0-f3cd-461e-8dd8-71c798569be2-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 20 00:11:38 crc kubenswrapper[5106]: I0320 00:11:38.066727 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9aff4e0-f3cd-461e-8dd8-71c798569be2-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 20 00:11:38 crc kubenswrapper[5106]: I0320 00:11:38.071163 5106 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 20 00:11:38 crc kubenswrapper[5106]: I0320 00:11:38.071192 5106 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:38 crc kubenswrapper[5106]: I0320 00:11:38.132211 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-jhgps\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:38 crc kubenswrapper[5106]: I0320 00:11:38.145222 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-dg59t"
Mar 20 00:11:38 crc kubenswrapper[5106]: I0320 00:11:38.335511 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" event={"ID":"f7dc02a2-0fb1-41e9-9d23-2565378a45a4","Type":"ContainerStarted","Data":"f9c0f21e47fcc5e703a7750d4d36119d6c2739736451c2e96e17f57c1e09f156"}
Mar 20 00:11:38 crc kubenswrapper[5106]: I0320 00:11:38.363233 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Mar 20 00:11:38 crc kubenswrapper[5106]: I0320 00:11:38.376274 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"d9aff4e0-f3cd-461e-8dd8-71c798569be2","Type":"ContainerDied","Data":"4ec17e6fc0617e45f90e9b5dba139655c48c5f0c087330cb05b1ee486f285171"}
Mar 20 00:11:38 crc kubenswrapper[5106]: I0320 00:11:38.376337 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ec17e6fc0617e45f90e9b5dba139655c48c5f0c087330cb05b1ee486f285171"
Mar 20 00:11:38 crc kubenswrapper[5106]: I0320 00:11:38.378099 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8fdp6" podStartSLOduration=19.378081948 podStartE2EDuration="19.378081948s" podCreationTimestamp="2026-03-20 00:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:38.377510344 +0000 UTC m=+152.811244398" watchObservedRunningTime="2026-03-20 00:11:38.378081948 +0000 UTC m=+152.811816002"
Mar 20 00:11:38 crc kubenswrapper[5106]: I0320 00:11:38.383164 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jhgps"
Mar 20 00:11:38 crc kubenswrapper[5106]: I0320 00:11:38.536412 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 00:11:38 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld
Mar 20 00:11:38 crc kubenswrapper[5106]: [+]process-running ok
Mar 20 00:11:38 crc kubenswrapper[5106]: healthz check failed
Mar 20 00:11:38 crc kubenswrapper[5106]: I0320 00:11:38.536803 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.016082 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.028999 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jhgps"]
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.096075 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee7b9f5d-baa1-48d9-a71d-2565029e33b2-kube-api-access\") pod \"ee7b9f5d-baa1-48d9-a71d-2565029e33b2\" (UID: \"ee7b9f5d-baa1-48d9-a71d-2565029e33b2\") "
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.096126 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee7b9f5d-baa1-48d9-a71d-2565029e33b2-kubelet-dir\") pod \"ee7b9f5d-baa1-48d9-a71d-2565029e33b2\" (UID: \"ee7b9f5d-baa1-48d9-a71d-2565029e33b2\") "
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.096619 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee7b9f5d-baa1-48d9-a71d-2565029e33b2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ee7b9f5d-baa1-48d9-a71d-2565029e33b2" (UID: "ee7b9f5d-baa1-48d9-a71d-2565029e33b2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.105294 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee7b9f5d-baa1-48d9-a71d-2565029e33b2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ee7b9f5d-baa1-48d9-a71d-2565029e33b2" (UID: "ee7b9f5d-baa1-48d9-a71d-2565029e33b2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.174143 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes"
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.197523 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee7b9f5d-baa1-48d9-a71d-2565029e33b2-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.197556 5106 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee7b9f5d-baa1-48d9-a71d-2565029e33b2-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.391396 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"ee7b9f5d-baa1-48d9-a71d-2565029e33b2","Type":"ContainerDied","Data":"8d147587188fa2fb04574b3c42cee51f861172e9cd2867f104551fd1a72d1903"}
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.391439 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d147587188fa2fb04574b3c42cee51f861172e9cd2867f104551fd1a72d1903"
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.391512 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.393526 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jhgps" event={"ID":"00c02264-3068-4287-a30a-13b0003bf5e1","Type":"ContainerStarted","Data":"85522d9bbf7329891ad6f933eca0439ee543787d92231736b39ac6b3f5bd1a46"}
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.424408 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.424532 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused"
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.428472 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-lw4rt"
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.428593 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-qxnjl"
Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.534940 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 00:11:39 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld
Mar 20 00:11:39 crc kubenswrapper[5106]: [+]process-running ok
Mar 20 00:11:39
crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:39 crc kubenswrapper[5106]: I0320 00:11:39.535002 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:40 crc kubenswrapper[5106]: I0320 00:11:40.414418 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jhgps" event={"ID":"00c02264-3068-4287-a30a-13b0003bf5e1","Type":"ContainerStarted","Data":"3b7390025d59e196f3ea42120ba85ed882256a7dc1b39b71ab927e35a2839193"} Mar 20 00:11:40 crc kubenswrapper[5106]: I0320 00:11:40.414757 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:11:40 crc kubenswrapper[5106]: I0320 00:11:40.433414 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-jhgps" podStartSLOduration=130.433397703 podStartE2EDuration="2m10.433397703s" podCreationTimestamp="2026-03-20 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:11:40.430927689 +0000 UTC m=+154.864661753" watchObservedRunningTime="2026-03-20 00:11:40.433397703 +0000 UTC m=+154.867131757" Mar 20 00:11:40 crc kubenswrapper[5106]: I0320 00:11:40.509915 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" Mar 20 00:11:40 crc kubenswrapper[5106]: I0320 00:11:40.511325 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:11:40 crc kubenswrapper[5106]: I0320 00:11:40.534331 5106 patch_prober.go:28] interesting 
pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:40 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:40 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:40 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:40 crc kubenswrapper[5106]: I0320 00:11:40.534810 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:40 crc kubenswrapper[5106]: I0320 00:11:40.960877 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-cp4kp"] Mar 20 00:11:40 crc kubenswrapper[5106]: I0320 00:11:40.961386 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" podUID="6ae55f47-30bb-45c6-bd6c-7fa0c7810d38" containerName="controller-manager" containerID="cri-o://b52828f9dc5580c60b2d55e439ac6b138baf5f0e19972535af21ac7d694359f8" gracePeriod=30 Mar 20 00:11:40 crc kubenswrapper[5106]: I0320 00:11:40.975227 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g"] Mar 20 00:11:40 crc kubenswrapper[5106]: I0320 00:11:40.975479 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" podUID="8539a810-4a95-4205-99c6-30b6362cfa01" containerName="route-controller-manager" containerID="cri-o://2be1cee4bd91a76d9e347bc4fca7f5da503d437aebc51672c85d45bf07cfc987" gracePeriod=30 Mar 20 00:11:41 crc kubenswrapper[5106]: I0320 00:11:41.421749 5106 
generic.go:358] "Generic (PLEG): container finished" podID="6ae55f47-30bb-45c6-bd6c-7fa0c7810d38" containerID="b52828f9dc5580c60b2d55e439ac6b138baf5f0e19972535af21ac7d694359f8" exitCode=0 Mar 20 00:11:41 crc kubenswrapper[5106]: I0320 00:11:41.422032 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" event={"ID":"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38","Type":"ContainerDied","Data":"b52828f9dc5580c60b2d55e439ac6b138baf5f0e19972535af21ac7d694359f8"} Mar 20 00:11:41 crc kubenswrapper[5106]: I0320 00:11:41.423805 5106 generic.go:358] "Generic (PLEG): container finished" podID="8539a810-4a95-4205-99c6-30b6362cfa01" containerID="2be1cee4bd91a76d9e347bc4fca7f5da503d437aebc51672c85d45bf07cfc987" exitCode=0 Mar 20 00:11:41 crc kubenswrapper[5106]: I0320 00:11:41.423899 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" event={"ID":"8539a810-4a95-4205-99c6-30b6362cfa01","Type":"ContainerDied","Data":"2be1cee4bd91a76d9e347bc4fca7f5da503d437aebc51672c85d45bf07cfc987"} Mar 20 00:11:41 crc kubenswrapper[5106]: I0320 00:11:41.489277 5106 patch_prober.go:28] interesting pod/console-64d44f6ddf-vx9v6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Mar 20 00:11:41 crc kubenswrapper[5106]: I0320 00:11:41.489658 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-vx9v6" podUID="74f7b3bf-429d-4b60-8b80-48300a789b1d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Mar 20 00:11:41 crc kubenswrapper[5106]: I0320 00:11:41.536014 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:41 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:41 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:41 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:41 crc kubenswrapper[5106]: I0320 00:11:41.536076 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:42 crc kubenswrapper[5106]: I0320 00:11:42.535269 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:42 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:42 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:42 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:42 crc kubenswrapper[5106]: I0320 00:11:42.535644 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:42 crc kubenswrapper[5106]: I0320 00:11:42.652397 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Mar 20 00:11:42 crc kubenswrapper[5106]: I0320 00:11:42.652459 5106 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Mar 20 00:11:43 crc kubenswrapper[5106]: I0320 00:11:43.534527 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:43 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:43 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:43 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:43 crc kubenswrapper[5106]: I0320 00:11:43.534621 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:44 crc kubenswrapper[5106]: I0320 00:11:44.251049 5106 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-cp4kp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Mar 20 00:11:44 crc kubenswrapper[5106]: I0320 00:11:44.251107 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" podUID="6ae55f47-30bb-45c6-bd6c-7fa0c7810d38" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Mar 20 00:11:44 crc kubenswrapper[5106]: I0320 00:11:44.252600 5106 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-crd8g 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Mar 20 00:11:44 crc kubenswrapper[5106]: I0320 00:11:44.252632 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" podUID="8539a810-4a95-4205-99c6-30b6362cfa01" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Mar 20 00:11:44 crc kubenswrapper[5106]: I0320 00:11:44.534813 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:44 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:44 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:44 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:44 crc kubenswrapper[5106]: I0320 00:11:44.534875 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:45 crc kubenswrapper[5106]: E0320 00:11:45.251822 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 20 00:11:45 crc kubenswrapper[5106]: E0320 00:11:45.253341 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 20 00:11:45 crc kubenswrapper[5106]: E0320 00:11:45.254410 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 20 00:11:45 crc kubenswrapper[5106]: E0320 00:11:45.254450 5106 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" podUID="0d49bd21-508b-4161-9bef-e0bad55ee83b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Mar 20 00:11:45 crc kubenswrapper[5106]: I0320 00:11:45.535316 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:45 crc kubenswrapper[5106]: [-]has-synced failed: reason withheld Mar 20 00:11:45 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:45 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:45 crc kubenswrapper[5106]: I0320 00:11:45.535391 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:45 crc kubenswrapper[5106]: I0320 00:11:45.702176 5106 ???:1] "http: TLS handshake 
error from 192.168.126.11:33820: no serving certificate available for the kubelet" Mar 20 00:11:46 crc kubenswrapper[5106]: I0320 00:11:46.534847 5106 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vzb7m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 00:11:46 crc kubenswrapper[5106]: [+]has-synced ok Mar 20 00:11:46 crc kubenswrapper[5106]: [+]process-running ok Mar 20 00:11:46 crc kubenswrapper[5106]: healthz check failed Mar 20 00:11:46 crc kubenswrapper[5106]: I0320 00:11:46.534925 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" podUID="a134eee6-8b26-4a27-8fbe-6fbc51787dc4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 00:11:47 crc kubenswrapper[5106]: I0320 00:11:47.535815 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:47 crc kubenswrapper[5106]: I0320 00:11:47.538383 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-vzb7m" Mar 20 00:11:49 crc kubenswrapper[5106]: I0320 00:11:49.427247 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Mar 20 00:11:49 crc kubenswrapper[5106]: I0320 00:11:49.427605 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Mar 20 00:11:50 crc 
kubenswrapper[5106]: I0320 00:11:50.755985 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.761103 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.794093 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv"] Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.795141 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ee7b9f5d-baa1-48d9-a71d-2565029e33b2" containerName="pruner" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.795160 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee7b9f5d-baa1-48d9-a71d-2565029e33b2" containerName="pruner" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.795176 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8539a810-4a95-4205-99c6-30b6362cfa01" containerName="route-controller-manager" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.795188 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="8539a810-4a95-4205-99c6-30b6362cfa01" containerName="route-controller-manager" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.795225 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6ae55f47-30bb-45c6-bd6c-7fa0c7810d38" containerName="controller-manager" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.795235 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae55f47-30bb-45c6-bd6c-7fa0c7810d38" containerName="controller-manager" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.795255 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing 
container" podUID="88da1299-0802-4745-8701-7de465542299" containerName="collect-profiles" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.795264 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="88da1299-0802-4745-8701-7de465542299" containerName="collect-profiles" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.795292 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9aff4e0-f3cd-461e-8dd8-71c798569be2" containerName="pruner" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.795301 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9aff4e0-f3cd-461e-8dd8-71c798569be2" containerName="pruner" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.795446 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="d9aff4e0-f3cd-461e-8dd8-71c798569be2" containerName="pruner" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.795460 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="8539a810-4a95-4205-99c6-30b6362cfa01" containerName="route-controller-manager" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.795476 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="88da1299-0802-4745-8701-7de465542299" containerName="collect-profiles" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.795491 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="6ae55f47-30bb-45c6-bd6c-7fa0c7810d38" containerName="controller-manager" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.795505 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="ee7b9f5d-baa1-48d9-a71d-2565029e33b2" containerName="pruner" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.808359 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8539a810-4a95-4205-99c6-30b6362cfa01-client-ca\") pod \"8539a810-4a95-4205-99c6-30b6362cfa01\" (UID: 
\"8539a810-4a95-4205-99c6-30b6362cfa01\") " Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.808459 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-proxy-ca-bundles\") pod \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.808556 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45sxg\" (UniqueName: \"kubernetes.io/projected/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-kube-api-access-45sxg\") pod \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.808632 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8539a810-4a95-4205-99c6-30b6362cfa01-serving-cert\") pod \"8539a810-4a95-4205-99c6-30b6362cfa01\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.808741 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-tmp\") pod \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.808768 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-serving-cert\") pod \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.808800 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-client-ca\") pod \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.808826 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8539a810-4a95-4205-99c6-30b6362cfa01-config\") pod \"8539a810-4a95-4205-99c6-30b6362cfa01\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.808858 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r956k\" (UniqueName: \"kubernetes.io/projected/8539a810-4a95-4205-99c6-30b6362cfa01-kube-api-access-r956k\") pod \"8539a810-4a95-4205-99c6-30b6362cfa01\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.808874 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-config\") pod \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\" (UID: \"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38\") " Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.808908 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8539a810-4a95-4205-99c6-30b6362cfa01-tmp\") pod \"8539a810-4a95-4205-99c6-30b6362cfa01\" (UID: \"8539a810-4a95-4205-99c6-30b6362cfa01\") " Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.809557 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8539a810-4a95-4205-99c6-30b6362cfa01-tmp" (OuterVolumeSpecName: "tmp") pod "8539a810-4a95-4205-99c6-30b6362cfa01" (UID: "8539a810-4a95-4205-99c6-30b6362cfa01"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.809669 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6ae55f47-30bb-45c6-bd6c-7fa0c7810d38" (UID: "6ae55f47-30bb-45c6-bd6c-7fa0c7810d38"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.809400 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8539a810-4a95-4205-99c6-30b6362cfa01-client-ca" (OuterVolumeSpecName: "client-ca") pod "8539a810-4a95-4205-99c6-30b6362cfa01" (UID: "8539a810-4a95-4205-99c6-30b6362cfa01"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.810294 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-client-ca" (OuterVolumeSpecName: "client-ca") pod "6ae55f47-30bb-45c6-bd6c-7fa0c7810d38" (UID: "6ae55f47-30bb-45c6-bd6c-7fa0c7810d38"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.810947 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-tmp" (OuterVolumeSpecName: "tmp") pod "6ae55f47-30bb-45c6-bd6c-7fa0c7810d38" (UID: "6ae55f47-30bb-45c6-bd6c-7fa0c7810d38"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.811500 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8539a810-4a95-4205-99c6-30b6362cfa01-config" (OuterVolumeSpecName: "config") pod "8539a810-4a95-4205-99c6-30b6362cfa01" (UID: "8539a810-4a95-4205-99c6-30b6362cfa01"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.811129 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-config" (OuterVolumeSpecName: "config") pod "6ae55f47-30bb-45c6-bd6c-7fa0c7810d38" (UID: "6ae55f47-30bb-45c6-bd6c-7fa0c7810d38"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.817806 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6ae55f47-30bb-45c6-bd6c-7fa0c7810d38" (UID: "6ae55f47-30bb-45c6-bd6c-7fa0c7810d38"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.820020 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-kube-api-access-45sxg" (OuterVolumeSpecName: "kube-api-access-45sxg") pod "6ae55f47-30bb-45c6-bd6c-7fa0c7810d38" (UID: "6ae55f47-30bb-45c6-bd6c-7fa0c7810d38"). InnerVolumeSpecName "kube-api-access-45sxg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.821198 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8539a810-4a95-4205-99c6-30b6362cfa01-kube-api-access-r956k" (OuterVolumeSpecName: "kube-api-access-r956k") pod "8539a810-4a95-4205-99c6-30b6362cfa01" (UID: "8539a810-4a95-4205-99c6-30b6362cfa01"). InnerVolumeSpecName "kube-api-access-r956k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.828705 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8539a810-4a95-4205-99c6-30b6362cfa01-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8539a810-4a95-4205-99c6-30b6362cfa01" (UID: "8539a810-4a95-4205-99c6-30b6362cfa01"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.910370 5106 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.910420 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8539a810-4a95-4205-99c6-30b6362cfa01-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.910433 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r956k\" (UniqueName: \"kubernetes.io/projected/8539a810-4a95-4205-99c6-30b6362cfa01-kube-api-access-r956k\") on node \"crc\" DevicePath \"\"" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.910447 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-config\") on node \"crc\" DevicePath \"\"" Mar 20 
00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.910458 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8539a810-4a95-4205-99c6-30b6362cfa01-tmp\") on node \"crc\" DevicePath \"\"" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.910474 5106 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8539a810-4a95-4205-99c6-30b6362cfa01-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.910484 5106 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.910496 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-45sxg\" (UniqueName: \"kubernetes.io/projected/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-kube-api-access-45sxg\") on node \"crc\" DevicePath \"\"" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.910507 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8539a810-4a95-4205-99c6-30b6362cfa01-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.910517 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-tmp\") on node \"crc\" DevicePath \"\"" Mar 20 00:11:50 crc kubenswrapper[5106]: I0320 00:11:50.910529 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.343457 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.357823 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv"] Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.357875 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b444678cb-qz52g"] Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.416469 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d83e6fda-89b8-4659-8dfc-04d8a2c10605-tmp\") pod \"route-controller-manager-64bfc6b5fd-xxnkv\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.416519 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwp2s\" (UniqueName: \"kubernetes.io/projected/d83e6fda-89b8-4659-8dfc-04d8a2c10605-kube-api-access-kwp2s\") pod \"route-controller-manager-64bfc6b5fd-xxnkv\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.416541 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83e6fda-89b8-4659-8dfc-04d8a2c10605-config\") pod \"route-controller-manager-64bfc6b5fd-xxnkv\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.416600 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d83e6fda-89b8-4659-8dfc-04d8a2c10605-serving-cert\") pod \"route-controller-manager-64bfc6b5fd-xxnkv\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.417033 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d83e6fda-89b8-4659-8dfc-04d8a2c10605-client-ca\") pod \"route-controller-manager-64bfc6b5fd-xxnkv\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.518223 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83e6fda-89b8-4659-8dfc-04d8a2c10605-config\") pod \"route-controller-manager-64bfc6b5fd-xxnkv\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.518368 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d83e6fda-89b8-4659-8dfc-04d8a2c10605-serving-cert\") pod \"route-controller-manager-64bfc6b5fd-xxnkv\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.518497 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d83e6fda-89b8-4659-8dfc-04d8a2c10605-client-ca\") pod \"route-controller-manager-64bfc6b5fd-xxnkv\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " 
pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.518538 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d83e6fda-89b8-4659-8dfc-04d8a2c10605-tmp\") pod \"route-controller-manager-64bfc6b5fd-xxnkv\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.518567 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kwp2s\" (UniqueName: \"kubernetes.io/projected/d83e6fda-89b8-4659-8dfc-04d8a2c10605-kube-api-access-kwp2s\") pod \"route-controller-manager-64bfc6b5fd-xxnkv\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.519616 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83e6fda-89b8-4659-8dfc-04d8a2c10605-config\") pod \"route-controller-manager-64bfc6b5fd-xxnkv\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.520078 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d83e6fda-89b8-4659-8dfc-04d8a2c10605-tmp\") pod \"route-controller-manager-64bfc6b5fd-xxnkv\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.520337 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d83e6fda-89b8-4659-8dfc-04d8a2c10605-client-ca\") pod \"route-controller-manager-64bfc6b5fd-xxnkv\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.534973 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d83e6fda-89b8-4659-8dfc-04d8a2c10605-serving-cert\") pod \"route-controller-manager-64bfc6b5fd-xxnkv\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.538376 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwp2s\" (UniqueName: \"kubernetes.io/projected/d83e6fda-89b8-4659-8dfc-04d8a2c10605-kube-api-access-kwp2s\") pod \"route-controller-manager-64bfc6b5fd-xxnkv\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.660769 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.804202 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b444678cb-qz52g"] Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.804321 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.804337 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" event={"ID":"8539a810-4a95-4205-99c6-30b6362cfa01","Type":"ContainerDied","Data":"05f7bfed10d161aa463707c6e400bb710f9cb7be8b557d6d665550e52b441988"} Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.804360 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" event={"ID":"6ae55f47-30bb-45c6-bd6c-7fa0c7810d38","Type":"ContainerDied","Data":"aab7aff0f9121715749223b0f541a236f4c1cacb3031269d8a82f89ff4d47c41"} Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.804388 5106 scope.go:117] "RemoveContainer" containerID="2be1cee4bd91a76d9e347bc4fca7f5da503d437aebc51672c85d45bf07cfc987" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.804624 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.805019 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.804564 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-cp4kp" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.809588 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-vx9v6" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.825815 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-config\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.825859 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-proxy-ca-bundles\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.825890 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-tmp\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.825906 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p66wk\" (UniqueName: \"kubernetes.io/projected/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-kube-api-access-p66wk\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " 
pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.825978 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-client-ca\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.826011 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-serving-cert\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.859028 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g"] Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.864421 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-crd8g"] Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.895612 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-cp4kp"] Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.902120 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-cp4kp"] Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.926710 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-config\") pod 
\"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.926771 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-proxy-ca-bundles\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.926818 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-tmp\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.926842 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p66wk\" (UniqueName: \"kubernetes.io/projected/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-kube-api-access-p66wk\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.927315 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-client-ca\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.927362 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-serving-cert\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.928431 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-proxy-ca-bundles\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.928742 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-tmp\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.928999 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-client-ca\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.930012 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-config\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.937466 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-serving-cert\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:51 crc kubenswrapper[5106]: I0320 00:11:51.955972 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p66wk\" (UniqueName: \"kubernetes.io/projected/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-kube-api-access-p66wk\") pod \"controller-manager-5b444678cb-qz52g\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:52 crc kubenswrapper[5106]: I0320 00:11:52.134394 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:11:52 crc kubenswrapper[5106]: I0320 00:11:52.652697 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Mar 20 00:11:52 crc kubenswrapper[5106]: I0320 00:11:52.652775 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Mar 20 00:11:52 crc kubenswrapper[5106]: I0320 00:11:52.652824 5106 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-ss8gd" Mar 20 00:11:52 crc kubenswrapper[5106]: I0320 00:11:52.653251 5106 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" 
containerStatusID={"Type":"cri-o","ID":"bbf670bc57a8949f7f83a47e63ef36968060509542f98904b5600153628a4dcf"} pod="openshift-console/downloads-747b44746d-ss8gd" containerMessage="Container download-server failed liveness probe, will be restarted" Mar 20 00:11:52 crc kubenswrapper[5106]: I0320 00:11:52.653308 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" containerID="cri-o://bbf670bc57a8949f7f83a47e63ef36968060509542f98904b5600153628a4dcf" gracePeriod=2 Mar 20 00:11:52 crc kubenswrapper[5106]: I0320 00:11:52.653332 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Mar 20 00:11:52 crc kubenswrapper[5106]: I0320 00:11:52.653414 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Mar 20 00:11:53 crc kubenswrapper[5106]: I0320 00:11:53.169080 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ae55f47-30bb-45c6-bd6c-7fa0c7810d38" path="/var/lib/kubelet/pods/6ae55f47-30bb-45c6-bd6c-7fa0c7810d38/volumes" Mar 20 00:11:53 crc kubenswrapper[5106]: I0320 00:11:53.170099 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8539a810-4a95-4205-99c6-30b6362cfa01" path="/var/lib/kubelet/pods/8539a810-4a95-4205-99c6-30b6362cfa01/volumes" Mar 20 00:11:54 crc kubenswrapper[5106]: I0320 00:11:54.535996 5106 generic.go:358] "Generic (PLEG): container finished" podID="9662276f-9936-4ed0-a464-c509bbaaa7a0" 
containerID="bbf670bc57a8949f7f83a47e63ef36968060509542f98904b5600153628a4dcf" exitCode=0 Mar 20 00:11:54 crc kubenswrapper[5106]: I0320 00:11:54.536081 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ss8gd" event={"ID":"9662276f-9936-4ed0-a464-c509bbaaa7a0","Type":"ContainerDied","Data":"bbf670bc57a8949f7f83a47e63ef36968060509542f98904b5600153628a4dcf"} Mar 20 00:11:55 crc kubenswrapper[5106]: E0320 00:11:55.252354 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 20 00:11:55 crc kubenswrapper[5106]: E0320 00:11:55.254006 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 20 00:11:55 crc kubenswrapper[5106]: E0320 00:11:55.257399 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 20 00:11:55 crc kubenswrapper[5106]: E0320 00:11:55.257437 5106 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" podUID="0d49bd21-508b-4161-9bef-e0bad55ee83b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" 
Mar 20 00:11:58 crc kubenswrapper[5106]: I0320 00:11:58.558450 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-hdf7z_0d49bd21-508b-4161-9bef-e0bad55ee83b/kube-multus-additional-cni-plugins/0.log" Mar 20 00:11:58 crc kubenswrapper[5106]: I0320 00:11:58.558981 5106 generic.go:358] "Generic (PLEG): container finished" podID="0d49bd21-508b-4161-9bef-e0bad55ee83b" containerID="6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e" exitCode=137 Mar 20 00:11:58 crc kubenswrapper[5106]: I0320 00:11:58.559031 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" event={"ID":"0d49bd21-508b-4161-9bef-e0bad55ee83b","Type":"ContainerDied","Data":"6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e"} Mar 20 00:12:00 crc kubenswrapper[5106]: I0320 00:12:00.134082 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566092-6knrz"] Mar 20 00:12:00 crc kubenswrapper[5106]: I0320 00:12:00.219784 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566092-6knrz"] Mar 20 00:12:00 crc kubenswrapper[5106]: I0320 00:12:00.219937 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566092-6knrz" Mar 20 00:12:00 crc kubenswrapper[5106]: I0320 00:12:00.222859 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Mar 20 00:12:00 crc kubenswrapper[5106]: I0320 00:12:00.223061 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Mar 20 00:12:00 crc kubenswrapper[5106]: I0320 00:12:00.223239 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5fjw8\"" Mar 20 00:12:00 crc kubenswrapper[5106]: I0320 00:12:00.356774 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9t5r\" (UniqueName: \"kubernetes.io/projected/92c58c24-f3dc-45d1-bf1f-1a679ae95553-kube-api-access-d9t5r\") pod \"auto-csr-approver-29566092-6knrz\" (UID: \"92c58c24-f3dc-45d1-bf1f-1a679ae95553\") " pod="openshift-infra/auto-csr-approver-29566092-6knrz" Mar 20 00:12:00 crc kubenswrapper[5106]: I0320 00:12:00.458473 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d9t5r\" (UniqueName: \"kubernetes.io/projected/92c58c24-f3dc-45d1-bf1f-1a679ae95553-kube-api-access-d9t5r\") pod \"auto-csr-approver-29566092-6knrz\" (UID: \"92c58c24-f3dc-45d1-bf1f-1a679ae95553\") " pod="openshift-infra/auto-csr-approver-29566092-6knrz" Mar 20 00:12:00 crc kubenswrapper[5106]: I0320 00:12:00.476367 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9t5r\" (UniqueName: \"kubernetes.io/projected/92c58c24-f3dc-45d1-bf1f-1a679ae95553-kube-api-access-d9t5r\") pod \"auto-csr-approver-29566092-6knrz\" (UID: \"92c58c24-f3dc-45d1-bf1f-1a679ae95553\") " pod="openshift-infra/auto-csr-approver-29566092-6knrz" Mar 20 00:12:00 crc kubenswrapper[5106]: I0320 00:12:00.511541 5106 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-vsgrz" Mar 20 00:12:00 crc kubenswrapper[5106]: I0320 00:12:00.535249 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566092-6knrz" Mar 20 00:12:00 crc kubenswrapper[5106]: I0320 00:12:00.884666 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b444678cb-qz52g"] Mar 20 00:12:00 crc kubenswrapper[5106]: I0320 00:12:00.894754 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv"] Mar 20 00:12:01 crc kubenswrapper[5106]: I0320 00:12:01.429016 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:12:02 crc kubenswrapper[5106]: I0320 00:12:02.653469 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Mar 20 00:12:02 crc kubenswrapper[5106]: I0320 00:12:02.653785 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Mar 20 00:12:05 crc kubenswrapper[5106]: E0320 00:12:05.249970 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e is running failed: container process not found" 
containerID="6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 20 00:12:05 crc kubenswrapper[5106]: E0320 00:12:05.250556 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e is running failed: container process not found" containerID="6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 20 00:12:05 crc kubenswrapper[5106]: E0320 00:12:05.251156 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e is running failed: container process not found" containerID="6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 20 00:12:05 crc kubenswrapper[5106]: E0320 00:12:05.251200 5106 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" podUID="0d49bd21-508b-4161-9bef-e0bad55ee83b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Mar 20 00:12:05 crc kubenswrapper[5106]: I0320 00:12:05.260291 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Mar 20 00:12:06 crc kubenswrapper[5106]: I0320 00:12:06.203667 5106 ???:1] "http: TLS handshake error from 192.168.126.11:33644: no serving certificate available for the kubelet" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.055379 5106 scope.go:117] 
"RemoveContainer" containerID="b52828f9dc5580c60b2d55e439ac6b138baf5f0e19972535af21ac7d694359f8" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.138953 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-hdf7z_0d49bd21-508b-4161-9bef-e0bad55ee83b/kube-multus-additional-cni-plugins/0.log" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.139035 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.285698 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0d49bd21-508b-4161-9bef-e0bad55ee83b-cni-sysctl-allowlist\") pod \"0d49bd21-508b-4161-9bef-e0bad55ee83b\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.286285 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d49bd21-508b-4161-9bef-e0bad55ee83b-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "0d49bd21-508b-4161-9bef-e0bad55ee83b" (UID: "0d49bd21-508b-4161-9bef-e0bad55ee83b"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.286394 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7vf4\" (UniqueName: \"kubernetes.io/projected/0d49bd21-508b-4161-9bef-e0bad55ee83b-kube-api-access-q7vf4\") pod \"0d49bd21-508b-4161-9bef-e0bad55ee83b\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.286499 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0d49bd21-508b-4161-9bef-e0bad55ee83b-ready\") pod \"0d49bd21-508b-4161-9bef-e0bad55ee83b\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.287142 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0d49bd21-508b-4161-9bef-e0bad55ee83b-tuning-conf-dir\") pod \"0d49bd21-508b-4161-9bef-e0bad55ee83b\" (UID: \"0d49bd21-508b-4161-9bef-e0bad55ee83b\") " Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.287540 5106 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/0d49bd21-508b-4161-9bef-e0bad55ee83b-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.287572 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d49bd21-508b-4161-9bef-e0bad55ee83b-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "0d49bd21-508b-4161-9bef-e0bad55ee83b" (UID: "0d49bd21-508b-4161-9bef-e0bad55ee83b"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.292671 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d49bd21-508b-4161-9bef-e0bad55ee83b-ready" (OuterVolumeSpecName: "ready") pod "0d49bd21-508b-4161-9bef-e0bad55ee83b" (UID: "0d49bd21-508b-4161-9bef-e0bad55ee83b"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.297082 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d49bd21-508b-4161-9bef-e0bad55ee83b-kube-api-access-q7vf4" (OuterVolumeSpecName: "kube-api-access-q7vf4") pod "0d49bd21-508b-4161-9bef-e0bad55ee83b" (UID: "0d49bd21-508b-4161-9bef-e0bad55ee83b"). InnerVolumeSpecName "kube-api-access-q7vf4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.389017 5106 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/0d49bd21-508b-4161-9bef-e0bad55ee83b-ready\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.389065 5106 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0d49bd21-508b-4161-9bef-e0bad55ee83b-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.389085 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q7vf4\" (UniqueName: \"kubernetes.io/projected/0d49bd21-508b-4161-9bef-e0bad55ee83b-kube-api-access-q7vf4\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.526201 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b444678cb-qz52g"] Mar 20 00:12:07 crc kubenswrapper[5106]: W0320 00:12:07.541040 5106 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a8eb0d5_ca89_4fd4_87a0_e25a90602c96.slice/crio-014a78041b9b206571a93993b926dfce234cb2d0c9aa9afb80255dd5b58c6650 WatchSource:0}: Error finding container 014a78041b9b206571a93993b926dfce234cb2d0c9aa9afb80255dd5b58c6650: Status 404 returned error can't find the container with id 014a78041b9b206571a93993b926dfce234cb2d0c9aa9afb80255dd5b58c6650 Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.620618 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv"] Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.643786 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566092-6knrz"] Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.649759 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtqct" event={"ID":"2902d42b-f752-4b77-9aef-994def9350ba","Type":"ContainerStarted","Data":"9bd1a3a8c5c13c4ae5a858a57b40ee55b4e5bcbc2aff8c3b437f6c6f8a415b8d"} Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.660367 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv56p" event={"ID":"862d0f24-7d93-4dd5-a664-398213a26a24","Type":"ContainerStarted","Data":"27732b6ace7382979c7097798c075f006cc505832366461f4ed51a505bac19ea"} Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.664981 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvw5w" event={"ID":"3ff85b1d-ffcc-44c1-a340-5e15b96f36db","Type":"ContainerStarted","Data":"e55a45a6b96fc2a721a626475ffbe15c1b8f6278ea945dd39860df902f563fde"} Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.667138 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b62gq" 
event={"ID":"fe8416cb-a9a0-45bd-aec9-25549b0c4551","Type":"ContainerStarted","Data":"41764deaaeaac256e62beb2aefa1d9c711ab539ce0f3c1f6e6319d8008e2abe9"} Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.690104 5106 generic.go:358] "Generic (PLEG): container finished" podID="29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" containerID="fb2bb5366740cef5a31d42570f2829b51a69eb30e795539a1504564bbc86d0d6" exitCode=0 Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.690261 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwx2n" event={"ID":"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8","Type":"ContainerDied","Data":"fb2bb5366740cef5a31d42570f2829b51a69eb30e795539a1504564bbc86d0d6"} Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.708785 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" event={"ID":"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96","Type":"ContainerStarted","Data":"014a78041b9b206571a93993b926dfce234cb2d0c9aa9afb80255dd5b58c6650"} Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.741300 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.741668 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-hdf7z" event={"ID":"0d49bd21-508b-4161-9bef-e0bad55ee83b","Type":"ContainerDied","Data":"66c14bfa43a92f0157c21f210ed35caf0599532f61b59aef8474a92095f9586f"} Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.741705 5106 scope.go:117] "RemoveContainer" containerID="6253a568022f7d0898b2f5a7e03ad9ad668883248b75ce40850882ee11ea8b5e" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.785547 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ss8gd" event={"ID":"9662276f-9936-4ed0-a464-c509bbaaa7a0","Type":"ContainerStarted","Data":"2d4dc89d358094b5b5f5d9f07c4e65e5e5669986967f28c1193be7e33c56d250"} Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.786596 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-ss8gd" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.786655 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.786691 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Mar 20 00:12:07 crc kubenswrapper[5106]: I0320 00:12:07.794101 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpzzz" 
event={"ID":"87f9f10a-e8ec-450d-b0a6-ea285c273dc4","Type":"ContainerStarted","Data":"9ec75929d505629495ba9bfecb2e0c8c799062de039671f12024b4b0c809f898"} Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.001387 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hdf7z"] Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.014436 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-hdf7z"] Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.806259 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566092-6knrz" event={"ID":"92c58c24-f3dc-45d1-bf1f-1a679ae95553","Type":"ContainerStarted","Data":"0a7aa24cb916c6210458b00dcaca4ee807c4c8d280ab3786ab6fc8e7b0f9fbfb"} Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.812641 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" event={"ID":"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96","Type":"ContainerStarted","Data":"40c303dc307a7865dec3e93e928d1d06efaf4d3653de674d963b5a4a32fad69f"} Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.812838 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" podUID="7a8eb0d5-ca89-4fd4-87a0-e25a90602c96" containerName="controller-manager" containerID="cri-o://40c303dc307a7865dec3e93e928d1d06efaf4d3653de674d963b5a4a32fad69f" gracePeriod=30 Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.814032 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.839238 5106 generic.go:358] "Generic (PLEG): container finished" podID="55a6a924-c50c-40e9-bce1-a4a8a636c5e4" 
containerID="cd2c49a06d7db7b3180a483e2cd8c0785966df52f08ced095e84714b5c6239a9" exitCode=0 Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.839332 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7cgp" event={"ID":"55a6a924-c50c-40e9-bce1-a4a8a636c5e4","Type":"ContainerDied","Data":"cd2c49a06d7db7b3180a483e2cd8c0785966df52f08ced095e84714b5c6239a9"} Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.845252 5106 generic.go:358] "Generic (PLEG): container finished" podID="a342c56e-aefd-443c-b37a-af158660104d" containerID="54e8589037baf1553ef7aca76c9cc210518edbf476ee5f13d443a36cff66c527" exitCode=0 Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.845392 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mgd7w" event={"ID":"a342c56e-aefd-443c-b37a-af158660104d","Type":"ContainerDied","Data":"54e8589037baf1553ef7aca76c9cc210518edbf476ee5f13d443a36cff66c527"} Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.863843 5106 generic.go:358] "Generic (PLEG): container finished" podID="87f9f10a-e8ec-450d-b0a6-ea285c273dc4" containerID="9ec75929d505629495ba9bfecb2e0c8c799062de039671f12024b4b0c809f898" exitCode=0 Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.864041 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpzzz" event={"ID":"87f9f10a-e8ec-450d-b0a6-ea285c273dc4","Type":"ContainerDied","Data":"9ec75929d505629495ba9bfecb2e0c8c799062de039671f12024b4b0c809f898"} Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.864080 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpzzz" event={"ID":"87f9f10a-e8ec-450d-b0a6-ea285c273dc4","Type":"ContainerStarted","Data":"e13dcb8ba2b7384705ea15ebbe798ed5ecff796937f08cccd03195a8d1897818"} Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.867284 5106 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" event={"ID":"d83e6fda-89b8-4659-8dfc-04d8a2c10605","Type":"ContainerStarted","Data":"12fec09068248ec8fc4f235686648aeaf0963866cb3861a35b89b26e0b12f71b"} Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.867313 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" event={"ID":"d83e6fda-89b8-4659-8dfc-04d8a2c10605","Type":"ContainerStarted","Data":"16bb2f4eea46bbe46fac292faf27432b49eb73b59de9ed40050dc111d528243b"} Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.867453 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" podUID="d83e6fda-89b8-4659-8dfc-04d8a2c10605" containerName="route-controller-manager" containerID="cri-o://12fec09068248ec8fc4f235686648aeaf0963866cb3861a35b89b26e0b12f71b" gracePeriod=30 Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.867939 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.866153 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" podStartSLOduration=28.866126539 podStartE2EDuration="28.866126539s" podCreationTimestamp="2026-03-20 00:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:12:08.857823474 +0000 UTC m=+183.291557528" watchObservedRunningTime="2026-03-20 00:12:08.866126539 +0000 UTC m=+183.299860593" Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.881919 5106 patch_prober.go:28] interesting pod/controller-manager-5b444678cb-qz52g container/controller-manager 
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": read tcp 10.217.0.2:48078->10.217.0.56:8443: read: connection reset by peer" start-of-body= Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.881989 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" podUID="7a8eb0d5-ca89-4fd4-87a0-e25a90602c96" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": read tcp 10.217.0.2:48078->10.217.0.56:8443: read: connection reset by peer" Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.925520 5106 generic.go:358] "Generic (PLEG): container finished" podID="2902d42b-f752-4b77-9aef-994def9350ba" containerID="9bd1a3a8c5c13c4ae5a858a57b40ee55b4e5bcbc2aff8c3b437f6c6f8a415b8d" exitCode=0 Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.926062 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtqct" event={"ID":"2902d42b-f752-4b77-9aef-994def9350ba","Type":"ContainerDied","Data":"9bd1a3a8c5c13c4ae5a858a57b40ee55b4e5bcbc2aff8c3b437f6c6f8a415b8d"} Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.944168 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bpzzz" podStartSLOduration=5.67307088 podStartE2EDuration="39.944127517s" podCreationTimestamp="2026-03-20 00:11:29 +0000 UTC" firstStartedPulling="2026-03-20 00:11:32.794286252 +0000 UTC m=+147.228020306" lastFinishedPulling="2026-03-20 00:12:07.065342889 +0000 UTC m=+181.499076943" observedRunningTime="2026-03-20 00:12:08.941466208 +0000 UTC m=+183.375200282" watchObservedRunningTime="2026-03-20 00:12:08.944127517 +0000 UTC m=+183.377861571" Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.948319 5106 generic.go:358] "Generic (PLEG): container finished" 
podID="fe8416cb-a9a0-45bd-aec9-25549b0c4551" containerID="41764deaaeaac256e62beb2aefa1d9c711ab539ce0f3c1f6e6319d8008e2abe9" exitCode=0 Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.948418 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b62gq" event={"ID":"fe8416cb-a9a0-45bd-aec9-25549b0c4551","Type":"ContainerDied","Data":"41764deaaeaac256e62beb2aefa1d9c711ab539ce0f3c1f6e6319d8008e2abe9"} Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.963797 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" podStartSLOduration=27.963779675 podStartE2EDuration="27.963779675s" podCreationTimestamp="2026-03-20 00:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:12:08.961879826 +0000 UTC m=+183.395613900" watchObservedRunningTime="2026-03-20 00:12:08.963779675 +0000 UTC m=+183.397513729" Mar 20 00:12:08 crc kubenswrapper[5106]: I0320 00:12:08.996994 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwx2n" event={"ID":"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8","Type":"ContainerStarted","Data":"b502b4535e352ec45bda4daf16d7b8689d88e8b727a3f1cb732cb27833301660"} Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.024263 5106 patch_prober.go:28] interesting pod/route-controller-manager-64bfc6b5fd-xxnkv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:45806->10.217.0.55:8443: read: connection reset by peer" start-of-body= Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.024346 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" 
podUID="d83e6fda-89b8-4659-8dfc-04d8a2c10605" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": read tcp 10.217.0.2:45806->10.217.0.55:8443: read: connection reset by peer" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.175633 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d49bd21-508b-4161-9bef-e0bad55ee83b" path="/var/lib/kubelet/pods/0d49bd21-508b-4161-9bef-e0bad55ee83b/volumes" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.395812 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-bpzzz" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.396231 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bpzzz" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.655272 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-5b444678cb-qz52g_7a8eb0d5-ca89-4fd4-87a0-e25a90602c96/controller-manager/0.log" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.655355 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.683213 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7675cb6858-lssg4"] Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.683866 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0d49bd21-508b-4161-9bef-e0bad55ee83b" containerName="kube-multus-additional-cni-plugins" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.683882 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d49bd21-508b-4161-9bef-e0bad55ee83b" containerName="kube-multus-additional-cni-plugins" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.683896 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7a8eb0d5-ca89-4fd4-87a0-e25a90602c96" containerName="controller-manager" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.683902 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a8eb0d5-ca89-4fd4-87a0-e25a90602c96" containerName="controller-manager" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.683992 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="0d49bd21-508b-4161-9bef-e0bad55ee83b" containerName="kube-multus-additional-cni-plugins" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.684004 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="7a8eb0d5-ca89-4fd4-87a0-e25a90602c96" containerName="controller-manager" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.782244 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.782349 5106 prober.go:120] "Probe 
failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.807279 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vwx2n" podStartSLOduration=5.8579864189999995 podStartE2EDuration="38.807262264s" podCreationTimestamp="2026-03-20 00:11:31 +0000 UTC" firstStartedPulling="2026-03-20 00:11:34.132219011 +0000 UTC m=+148.565953065" lastFinishedPulling="2026-03-20 00:12:07.081494856 +0000 UTC m=+181.515228910" observedRunningTime="2026-03-20 00:12:09.804156913 +0000 UTC m=+184.237890957" watchObservedRunningTime="2026-03-20 00:12:09.807262264 +0000 UTC m=+184.240996318" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.838029 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-tmp\") pod \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.838541 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p66wk\" (UniqueName: \"kubernetes.io/projected/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-kube-api-access-p66wk\") pod \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.838670 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-serving-cert\") pod \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " Mar 20 00:12:09 crc 
kubenswrapper[5106]: I0320 00:12:09.838664 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-tmp" (OuterVolumeSpecName: "tmp") pod "7a8eb0d5-ca89-4fd4-87a0-e25a90602c96" (UID: "7a8eb0d5-ca89-4fd4-87a0-e25a90602c96"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.838699 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-proxy-ca-bundles\") pod \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.838783 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-client-ca\") pod \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.838820 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-config\") pod \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\" (UID: \"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96\") " Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.839033 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-tmp\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.839870 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-config" (OuterVolumeSpecName: "config") pod "7a8eb0d5-ca89-4fd4-87a0-e25a90602c96" (UID: 
"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.839976 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7a8eb0d5-ca89-4fd4-87a0-e25a90602c96" (UID: "7a8eb0d5-ca89-4fd4-87a0-e25a90602c96"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.843186 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-client-ca" (OuterVolumeSpecName: "client-ca") pod "7a8eb0d5-ca89-4fd4-87a0-e25a90602c96" (UID: "7a8eb0d5-ca89-4fd4-87a0-e25a90602c96"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.847162 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a8eb0d5-ca89-4fd4-87a0-e25a90602c96" (UID: "7a8eb0d5-ca89-4fd4-87a0-e25a90602c96"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.849016 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-kube-api-access-p66wk" (OuterVolumeSpecName: "kube-api-access-p66wk") pod "7a8eb0d5-ca89-4fd4-87a0-e25a90602c96" (UID: "7a8eb0d5-ca89-4fd4-87a0-e25a90602c96"). InnerVolumeSpecName "kube-api-access-p66wk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.883246 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7675cb6858-lssg4"] Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.883326 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.940517 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p66wk\" (UniqueName: \"kubernetes.io/projected/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-kube-api-access-p66wk\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.940554 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.940565 5106 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.940588 5106 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:09 crc kubenswrapper[5106]: I0320 00:12:09.940615 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.011818 5106 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-64bfc6b5fd-xxnkv_d83e6fda-89b8-4659-8dfc-04d8a2c10605/route-controller-manager/0.log" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.011857 5106 generic.go:358] "Generic (PLEG): container finished" podID="d83e6fda-89b8-4659-8dfc-04d8a2c10605" containerID="12fec09068248ec8fc4f235686648aeaf0963866cb3861a35b89b26e0b12f71b" exitCode=255 Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.011918 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" event={"ID":"d83e6fda-89b8-4659-8dfc-04d8a2c10605","Type":"ContainerDied","Data":"12fec09068248ec8fc4f235686648aeaf0963866cb3861a35b89b26e0b12f71b"} Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.016310 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtqct" event={"ID":"2902d42b-f752-4b77-9aef-994def9350ba","Type":"ContainerStarted","Data":"21993beea9e42bd5c23ffcf033e0ed2814b4fec0bb397cb7f534a99637317eec"} Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.019135 5106 generic.go:358] "Generic (PLEG): container finished" podID="3ff85b1d-ffcc-44c1-a340-5e15b96f36db" containerID="e55a45a6b96fc2a721a626475ffbe15c1b8f6278ea945dd39860df902f563fde" exitCode=0 Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.019251 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvw5w" event={"ID":"3ff85b1d-ffcc-44c1-a340-5e15b96f36db","Type":"ContainerDied","Data":"e55a45a6b96fc2a721a626475ffbe15c1b8f6278ea945dd39860df902f563fde"} Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.027889 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-5b444678cb-qz52g_7a8eb0d5-ca89-4fd4-87a0-e25a90602c96/controller-manager/0.log" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.027957 5106 
generic.go:358] "Generic (PLEG): container finished" podID="7a8eb0d5-ca89-4fd4-87a0-e25a90602c96" containerID="40c303dc307a7865dec3e93e928d1d06efaf4d3653de674d963b5a4a32fad69f" exitCode=255 Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.028253 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.029095 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" event={"ID":"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96","Type":"ContainerDied","Data":"40c303dc307a7865dec3e93e928d1d06efaf4d3653de674d963b5a4a32fad69f"} Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.029146 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b444678cb-qz52g" event={"ID":"7a8eb0d5-ca89-4fd4-87a0-e25a90602c96","Type":"ContainerDied","Data":"014a78041b9b206571a93993b926dfce234cb2d0c9aa9afb80255dd5b58c6650"} Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.029166 5106 scope.go:117] "RemoveContainer" containerID="40c303dc307a7865dec3e93e928d1d06efaf4d3653de674d963b5a4a32fad69f" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.042152 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c318e5ed-5262-4842-ba76-4ac168e42455-tmp\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.042205 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-config\") pod \"controller-manager-7675cb6858-lssg4\" (UID: 
\"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.042227 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpkzf\" (UniqueName: \"kubernetes.io/projected/c318e5ed-5262-4842-ba76-4ac168e42455-kube-api-access-fpkzf\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.042250 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c318e5ed-5262-4842-ba76-4ac168e42455-serving-cert\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.042271 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-client-ca\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.042314 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-proxy-ca-bundles\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.047691 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-c7cgp" event={"ID":"55a6a924-c50c-40e9-bce1-a4a8a636c5e4","Type":"ContainerStarted","Data":"01731054a441977aa29c5c757e1e4d2fca5b5d800e2f51b00cf428500fb2a145"} Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.048319 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qtqct" podStartSLOduration=7.776460303 podStartE2EDuration="42.048301539s" podCreationTimestamp="2026-03-20 00:11:28 +0000 UTC" firstStartedPulling="2026-03-20 00:11:32.82707392 +0000 UTC m=+147.260807974" lastFinishedPulling="2026-03-20 00:12:07.098915156 +0000 UTC m=+181.532649210" observedRunningTime="2026-03-20 00:12:10.045759493 +0000 UTC m=+184.479493557" watchObservedRunningTime="2026-03-20 00:12:10.048301539 +0000 UTC m=+184.482035593" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.059933 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mgd7w" event={"ID":"a342c56e-aefd-443c-b37a-af158660104d","Type":"ContainerStarted","Data":"ff5d5946faa8a5808f6ed436c5ff94f8bf2cfef903efa2c9cacfa328fb84fb45"} Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.082307 5106 scope.go:117] "RemoveContainer" containerID="40c303dc307a7865dec3e93e928d1d06efaf4d3653de674d963b5a4a32fad69f" Mar 20 00:12:10 crc kubenswrapper[5106]: E0320 00:12:10.086938 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40c303dc307a7865dec3e93e928d1d06efaf4d3653de674d963b5a4a32fad69f\": container with ID starting with 40c303dc307a7865dec3e93e928d1d06efaf4d3653de674d963b5a4a32fad69f not found: ID does not exist" containerID="40c303dc307a7865dec3e93e928d1d06efaf4d3653de674d963b5a4a32fad69f" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.086987 5106 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"40c303dc307a7865dec3e93e928d1d06efaf4d3653de674d963b5a4a32fad69f"} err="failed to get container status \"40c303dc307a7865dec3e93e928d1d06efaf4d3653de674d963b5a4a32fad69f\": rpc error: code = NotFound desc = could not find container \"40c303dc307a7865dec3e93e928d1d06efaf4d3653de674d963b5a4a32fad69f\": container with ID starting with 40c303dc307a7865dec3e93e928d1d06efaf4d3653de674d963b5a4a32fad69f not found: ID does not exist" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.110025 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c7cgp" podStartSLOduration=6.128393964 podStartE2EDuration="39.110007555s" podCreationTimestamp="2026-03-20 00:11:31 +0000 UTC" firstStartedPulling="2026-03-20 00:11:34.178448847 +0000 UTC m=+148.612182891" lastFinishedPulling="2026-03-20 00:12:07.160062428 +0000 UTC m=+181.593796482" observedRunningTime="2026-03-20 00:12:10.106182146 +0000 UTC m=+184.539916200" watchObservedRunningTime="2026-03-20 00:12:10.110007555 +0000 UTC m=+184.543741609" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.120738 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b444678cb-qz52g"] Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.123245 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b444678cb-qz52g"] Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.145799 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c318e5ed-5262-4842-ba76-4ac168e42455-serving-cert\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.146177 5106 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-client-ca\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.146302 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-proxy-ca-bundles\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.146480 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c318e5ed-5262-4842-ba76-4ac168e42455-tmp\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.146575 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-config\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.146672 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fpkzf\" (UniqueName: \"kubernetes.io/projected/c318e5ed-5262-4842-ba76-4ac168e42455-kube-api-access-fpkzf\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 
00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.163161 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-client-ca\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.164405 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c318e5ed-5262-4842-ba76-4ac168e42455-tmp\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.165351 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-config\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.183964 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-proxy-ca-bundles\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.188178 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c318e5ed-5262-4842-ba76-4ac168e42455-serving-cert\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " 
pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.193032 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpkzf\" (UniqueName: \"kubernetes.io/projected/c318e5ed-5262-4842-ba76-4ac168e42455-kube-api-access-fpkzf\") pod \"controller-manager-7675cb6858-lssg4\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") " pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.319126 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.486329 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.489028 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.489103 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.527209 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mgd7w" podStartSLOduration=7.156745969 podStartE2EDuration="41.527173195s" podCreationTimestamp="2026-03-20 00:11:29 +0000 UTC" firstStartedPulling="2026-03-20 00:11:32.789791196 +0000 UTC m=+147.223525250" lastFinishedPulling="2026-03-20 
00:12:07.160218422 +0000 UTC m=+181.593952476" observedRunningTime="2026-03-20 00:12:10.518176253 +0000 UTC m=+184.951910317" watchObservedRunningTime="2026-03-20 00:12:10.527173195 +0000 UTC m=+184.960907249" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.781973 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.785020 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.789055 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.789700 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.872239 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7675cb6858-lssg4"] Mar 20 00:12:10 crc kubenswrapper[5106]: W0320 00:12:10.922688 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc318e5ed_5262_4842_ba76_4ac168e42455.slice/crio-d9bf899886d3c6caec0ab4bb6b22319a909984f66ce1d24a119087941fc691f5 WatchSource:0}: Error finding container d9bf899886d3c6caec0ab4bb6b22319a909984f66ce1d24a119087941fc691f5: Status 404 returned error can't find the container with id d9bf899886d3c6caec0ab4bb6b22319a909984f66ce1d24a119087941fc691f5 Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.962357 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68151e8d-2823-4f28-86ce-4a7508597ece-kube-api-access\") pod 
\"revision-pruner-12-crc\" (UID: \"68151e8d-2823-4f28-86ce-4a7508597ece\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Mar 20 00:12:10 crc kubenswrapper[5106]: I0320 00:12:10.962469 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68151e8d-2823-4f28-86ce-4a7508597ece-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"68151e8d-2823-4f28-86ce-4a7508597ece\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.045313 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-64bfc6b5fd-xxnkv_d83e6fda-89b8-4659-8dfc-04d8a2c10605/route-controller-manager/0.log" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.045391 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.063435 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68151e8d-2823-4f28-86ce-4a7508597ece-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"68151e8d-2823-4f28-86ce-4a7508597ece\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.063533 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68151e8d-2823-4f28-86ce-4a7508597ece-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"68151e8d-2823-4f28-86ce-4a7508597ece\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.063949 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/68151e8d-2823-4f28-86ce-4a7508597ece-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"68151e8d-2823-4f28-86ce-4a7508597ece\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.079233 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" event={"ID":"c318e5ed-5262-4842-ba76-4ac168e42455","Type":"ContainerStarted","Data":"d9bf899886d3c6caec0ab4bb6b22319a909984f66ce1d24a119087941fc691f5"} Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.084210 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"] Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.084816 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d83e6fda-89b8-4659-8dfc-04d8a2c10605" containerName="route-controller-manager" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.084835 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83e6fda-89b8-4659-8dfc-04d8a2c10605" containerName="route-controller-manager" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.084945 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="d83e6fda-89b8-4659-8dfc-04d8a2c10605" containerName="route-controller-manager" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.091112 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-64bfc6b5fd-xxnkv_d83e6fda-89b8-4659-8dfc-04d8a2c10605/route-controller-manager/0.log" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.094544 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68151e8d-2823-4f28-86ce-4a7508597ece-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"68151e8d-2823-4f28-86ce-4a7508597ece\") " 
pod="openshift-kube-apiserver/revision-pruner-12-crc" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.094601 5106 generic.go:358] "Generic (PLEG): container finished" podID="862d0f24-7d93-4dd5-a664-398213a26a24" containerID="27732b6ace7382979c7097798c075f006cc505832366461f4ed51a505bac19ea" exitCode=0 Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.117978 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.166163 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d83e6fda-89b8-4659-8dfc-04d8a2c10605-tmp\") pod \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.166212 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwp2s\" (UniqueName: \"kubernetes.io/projected/d83e6fda-89b8-4659-8dfc-04d8a2c10605-kube-api-access-kwp2s\") pod \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.166280 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d83e6fda-89b8-4659-8dfc-04d8a2c10605-serving-cert\") pod \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.166379 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d83e6fda-89b8-4659-8dfc-04d8a2c10605-client-ca\") pod \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.166420 5106 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83e6fda-89b8-4659-8dfc-04d8a2c10605-config\") pod \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\" (UID: \"d83e6fda-89b8-4659-8dfc-04d8a2c10605\") " Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.166680 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83e6fda-89b8-4659-8dfc-04d8a2c10605-tmp" (OuterVolumeSpecName: "tmp") pod "d83e6fda-89b8-4659-8dfc-04d8a2c10605" (UID: "d83e6fda-89b8-4659-8dfc-04d8a2c10605"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.167454 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83e6fda-89b8-4659-8dfc-04d8a2c10605-client-ca" (OuterVolumeSpecName: "client-ca") pod "d83e6fda-89b8-4659-8dfc-04d8a2c10605" (UID: "d83e6fda-89b8-4659-8dfc-04d8a2c10605"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.167990 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83e6fda-89b8-4659-8dfc-04d8a2c10605-config" (OuterVolumeSpecName: "config") pod "d83e6fda-89b8-4659-8dfc-04d8a2c10605" (UID: "d83e6fda-89b8-4659-8dfc-04d8a2c10605"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.170496 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d83e6fda-89b8-4659-8dfc-04d8a2c10605-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d83e6fda-89b8-4659-8dfc-04d8a2c10605" (UID: "d83e6fda-89b8-4659-8dfc-04d8a2c10605"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.175208 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.175236 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"] Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.175271 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv" event={"ID":"d83e6fda-89b8-4659-8dfc-04d8a2c10605","Type":"ContainerDied","Data":"16bb2f4eea46bbe46fac292faf27432b49eb73b59de9ed40050dc111d528243b"} Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.175764 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.176837 5106 scope.go:117] "RemoveContainer" containerID="12fec09068248ec8fc4f235686648aeaf0963866cb3861a35b89b26e0b12f71b" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.185052 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83e6fda-89b8-4659-8dfc-04d8a2c10605-kube-api-access-kwp2s" (OuterVolumeSpecName: "kube-api-access-kwp2s") pod "d83e6fda-89b8-4659-8dfc-04d8a2c10605" (UID: "d83e6fda-89b8-4659-8dfc-04d8a2c10605"). InnerVolumeSpecName "kube-api-access-kwp2s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.198596 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a8eb0d5-ca89-4fd4-87a0-e25a90602c96" path="/var/lib/kubelet/pods/7a8eb0d5-ca89-4fd4-87a0-e25a90602c96/volumes" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.199898 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv56p" event={"ID":"862d0f24-7d93-4dd5-a664-398213a26a24","Type":"ContainerDied","Data":"27732b6ace7382979c7097798c075f006cc505832366461f4ed51a505bac19ea"} Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.199936 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvw5w" event={"ID":"3ff85b1d-ffcc-44c1-a340-5e15b96f36db","Type":"ContainerStarted","Data":"38cc5ddeb422e8c1418b76ad9e17d4e43f7de62f1c49bdead6baf1d2bf7c2d6f"} Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.199950 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b62gq" event={"ID":"fe8416cb-a9a0-45bd-aec9-25549b0c4551","Type":"ContainerStarted","Data":"b64a5b7ab5c8ff5b7beca98285f941dc637f05d4da3efb57d3c83e6bffac4365"} Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.246053 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b62gq" podStartSLOduration=7.961213019 podStartE2EDuration="42.24603756s" podCreationTimestamp="2026-03-20 00:11:29 +0000 UTC" firstStartedPulling="2026-03-20 00:11:32.867401974 +0000 UTC m=+147.301136028" lastFinishedPulling="2026-03-20 00:12:07.152226515 +0000 UTC m=+181.585960569" observedRunningTime="2026-03-20 00:12:11.212364639 +0000 UTC m=+185.646098703" watchObservedRunningTime="2026-03-20 00:12:11.24603756 +0000 UTC m=+185.679771614" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.269148 5106 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc1bf81e-8b70-4151-b76c-2904905725f6-serving-cert\") pod \"route-controller-manager-84cc6d7676-hvjf4\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") " pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.269208 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzgwd\" (UniqueName: \"kubernetes.io/projected/fc1bf81e-8b70-4151-b76c-2904905725f6-kube-api-access-kzgwd\") pod \"route-controller-manager-84cc6d7676-hvjf4\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") " pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.269246 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc1bf81e-8b70-4151-b76c-2904905725f6-client-ca\") pod \"route-controller-manager-84cc6d7676-hvjf4\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") " pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.269335 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fc1bf81e-8b70-4151-b76c-2904905725f6-tmp\") pod \"route-controller-manager-84cc6d7676-hvjf4\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") " pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4" Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.269360 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc1bf81e-8b70-4151-b76c-2904905725f6-config\") pod 
\"route-controller-manager-84cc6d7676-hvjf4\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") " pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.269396 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83e6fda-89b8-4659-8dfc-04d8a2c10605-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.269408 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d83e6fda-89b8-4659-8dfc-04d8a2c10605-tmp\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.269418 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kwp2s\" (UniqueName: \"kubernetes.io/projected/d83e6fda-89b8-4659-8dfc-04d8a2c10605-kube-api-access-kwp2s\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.269430 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d83e6fda-89b8-4659-8dfc-04d8a2c10605-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.269446 5106 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d83e6fda-89b8-4659-8dfc-04d8a2c10605-client-ca\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.300382 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fvw5w" podStartSLOduration=7.372948501 podStartE2EDuration="39.300365305s" podCreationTimestamp="2026-03-20 00:11:32 +0000 UTC" firstStartedPulling="2026-03-20 00:11:35.232209473 +0000 UTC m=+149.665943527" lastFinishedPulling="2026-03-20 00:12:07.159626277 +0000 UTC m=+181.593360331" observedRunningTime="2026-03-20 00:12:11.299441371 +0000 UTC m=+185.733175445" watchObservedRunningTime="2026-03-20 00:12:11.300365305 +0000 UTC m=+185.734099359"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.371755 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fc1bf81e-8b70-4151-b76c-2904905725f6-tmp\") pod \"route-controller-manager-84cc6d7676-hvjf4\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") " pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.371854 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc1bf81e-8b70-4151-b76c-2904905725f6-config\") pod \"route-controller-manager-84cc6d7676-hvjf4\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") " pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.371881 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc1bf81e-8b70-4151-b76c-2904905725f6-serving-cert\") pod \"route-controller-manager-84cc6d7676-hvjf4\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") " pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.372278 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fc1bf81e-8b70-4151-b76c-2904905725f6-tmp\") pod \"route-controller-manager-84cc6d7676-hvjf4\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") " pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.373098 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc1bf81e-8b70-4151-b76c-2904905725f6-config\") pod \"route-controller-manager-84cc6d7676-hvjf4\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") " pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.373794 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kzgwd\" (UniqueName: \"kubernetes.io/projected/fc1bf81e-8b70-4151-b76c-2904905725f6-kube-api-access-kzgwd\") pod \"route-controller-manager-84cc6d7676-hvjf4\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") " pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.374769 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc1bf81e-8b70-4151-b76c-2904905725f6-client-ca\") pod \"route-controller-manager-84cc6d7676-hvjf4\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") " pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.375485 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc1bf81e-8b70-4151-b76c-2904905725f6-client-ca\") pod \"route-controller-manager-84cc6d7676-hvjf4\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") " pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.385118 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc1bf81e-8b70-4151-b76c-2904905725f6-serving-cert\") pod \"route-controller-manager-84cc6d7676-hvjf4\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") " pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.407214 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzgwd\" (UniqueName: \"kubernetes.io/projected/fc1bf81e-8b70-4151-b76c-2904905725f6-kube-api-access-kzgwd\") pod \"route-controller-manager-84cc6d7676-hvjf4\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") " pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.506305 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv"]
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.513106 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64bfc6b5fd-xxnkv"]
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.513911 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.532364 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-c7cgp"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.532397 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c7cgp"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.587127 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Mar 20 00:12:11 crc kubenswrapper[5106]: W0320 00:12:11.645331 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod68151e8d_2823_4f28_86ce_4a7508597ece.slice/crio-c49b819bfecb2a126ff7a60880ba0845db75ed7c5b83c12b8a648a36d4ba82ee WatchSource:0}: Error finding container c49b819bfecb2a126ff7a60880ba0845db75ed7c5b83c12b8a648a36d4ba82ee: Status 404 returned error can't find the container with id c49b819bfecb2a126ff7a60880ba0845db75ed7c5b83c12b8a648a36d4ba82ee
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.715650 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bpzzz" podUID="87f9f10a-e8ec-450d-b0a6-ea285c273dc4" containerName="registry-server" probeResult="failure" output=<
Mar 20 00:12:11 crc kubenswrapper[5106]: timeout: failed to connect service ":50051" within 1s
Mar 20 00:12:11 crc kubenswrapper[5106]: >
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.919191 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"]
Mar 20 00:12:11 crc kubenswrapper[5106]: W0320 00:12:11.924137 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc1bf81e_8b70_4151_b76c_2904905725f6.slice/crio-d7097d7f63a2107797dcb28783adbc24eeb113db45f3c112b66243df73602692 WatchSource:0}: Error finding container d7097d7f63a2107797dcb28783adbc24eeb113db45f3c112b66243df73602692: Status 404 returned error can't find the container with id d7097d7f63a2107797dcb28783adbc24eeb113db45f3c112b66243df73602692
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.942013 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vwx2n"
Mar 20 00:12:11 crc kubenswrapper[5106]: I0320 00:12:11.942058 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-vwx2n"
Mar 20 00:12:12 crc kubenswrapper[5106]: I0320 00:12:12.118501 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"68151e8d-2823-4f28-86ce-4a7508597ece","Type":"ContainerStarted","Data":"c49b819bfecb2a126ff7a60880ba0845db75ed7c5b83c12b8a648a36d4ba82ee"}
Mar 20 00:12:12 crc kubenswrapper[5106]: I0320 00:12:12.119670 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4" event={"ID":"fc1bf81e-8b70-4151-b76c-2904905725f6","Type":"ContainerStarted","Data":"d7097d7f63a2107797dcb28783adbc24eeb113db45f3c112b66243df73602692"}
Mar 20 00:12:12 crc kubenswrapper[5106]: I0320 00:12:12.120859 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" event={"ID":"c318e5ed-5262-4842-ba76-4ac168e42455","Type":"ContainerStarted","Data":"6548209dfd9f24abed7e4a8a5258abc48c5a97ed6dcf781d6cb9ddb9ebb12089"}
Mar 20 00:12:12 crc kubenswrapper[5106]: I0320 00:12:12.511990 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" podStartSLOduration=12.511970296 podStartE2EDuration="12.511970296s" podCreationTimestamp="2026-03-20 00:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:12:12.510376705 +0000 UTC m=+186.944110769" watchObservedRunningTime="2026-03-20 00:12:12.511970296 +0000 UTC m=+186.945704350"
Mar 20 00:12:12 crc kubenswrapper[5106]: I0320 00:12:12.598051 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-c7cgp" podUID="55a6a924-c50c-40e9-bce1-a4a8a636c5e4" containerName="registry-server" probeResult="failure" output=<
Mar 20 00:12:12 crc kubenswrapper[5106]: timeout: failed to connect service ":50051" within 1s
Mar 20 00:12:12 crc kubenswrapper[5106]: >
Mar 20 00:12:12 crc kubenswrapper[5106]: I0320 00:12:12.652956 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Mar 20 00:12:12 crc kubenswrapper[5106]: I0320 00:12:12.653021 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused"
Mar 20 00:12:12 crc kubenswrapper[5106]: I0320 00:12:12.910429 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fvw5w"
Mar 20 00:12:12 crc kubenswrapper[5106]: I0320 00:12:12.910728 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-fvw5w"
Mar 20 00:12:13 crc kubenswrapper[5106]: I0320 00:12:13.009792 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-vwx2n" podUID="29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" containerName="registry-server" probeResult="failure" output=<
Mar 20 00:12:13 crc kubenswrapper[5106]: timeout: failed to connect service ":50051" within 1s
Mar 20 00:12:13 crc kubenswrapper[5106]: >
Mar 20 00:12:13 crc kubenswrapper[5106]: I0320 00:12:13.135677 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv56p" event={"ID":"862d0f24-7d93-4dd5-a664-398213a26a24","Type":"ContainerStarted","Data":"439b1f9a519fd91ae3c6376b61333f9a6a63c2c47246c6af7030d8f416aa0842"}
Mar 20 00:12:13 crc kubenswrapper[5106]: I0320 00:12:13.135719 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4"
Mar 20 00:12:13 crc kubenswrapper[5106]: I0320 00:12:13.140471 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4"
Mar 20 00:12:13 crc kubenswrapper[5106]: I0320 00:12:13.156349 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nv56p" podStartSLOduration=9.226194691 podStartE2EDuration="41.156331544s" podCreationTimestamp="2026-03-20 00:11:32 +0000 UTC" firstStartedPulling="2026-03-20 00:11:35.227386869 +0000 UTC m=+149.661120923" lastFinishedPulling="2026-03-20 00:12:07.157523722 +0000 UTC m=+181.591257776" observedRunningTime="2026-03-20 00:12:13.153532111 +0000 UTC m=+187.587266165" watchObservedRunningTime="2026-03-20 00:12:13.156331544 +0000 UTC m=+187.590065588"
Mar 20 00:12:13 crc kubenswrapper[5106]: I0320 00:12:13.168097 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d83e6fda-89b8-4659-8dfc-04d8a2c10605" path="/var/lib/kubelet/pods/d83e6fda-89b8-4659-8dfc-04d8a2c10605/volumes"
Mar 20 00:12:14 crc kubenswrapper[5106]: I0320 00:12:14.021401 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fvw5w" podUID="3ff85b1d-ffcc-44c1-a340-5e15b96f36db" containerName="registry-server" probeResult="failure" output=<
Mar 20 00:12:14 crc kubenswrapper[5106]: timeout: failed to connect service ":50051" within 1s
Mar 20 00:12:14 crc kubenswrapper[5106]: >
Mar 20 00:12:14 crc kubenswrapper[5106]: I0320 00:12:14.171298 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"68151e8d-2823-4f28-86ce-4a7508597ece","Type":"ContainerStarted","Data":"efe0afa107619bd54f72a773ecb4ff825833c6da5931d499a498f36be9fa5b62"}
Mar 20 00:12:14 crc kubenswrapper[5106]: I0320 00:12:14.181962 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4" event={"ID":"fc1bf81e-8b70-4151-b76c-2904905725f6","Type":"ContainerStarted","Data":"23ba0a3f0411521039770c9b2a262483c546c2e681d96ed1c8eaaac220e72e5d"}
Mar 20 00:12:14 crc kubenswrapper[5106]: I0320 00:12:14.182010 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"
Mar 20 00:12:14 crc kubenswrapper[5106]: I0320 00:12:14.189553 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"
Mar 20 00:12:14 crc kubenswrapper[5106]: I0320 00:12:14.193526 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=4.193506272 podStartE2EDuration="4.193506272s" podCreationTimestamp="2026-03-20 00:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:12:14.192899377 +0000 UTC m=+188.626633431" watchObservedRunningTime="2026-03-20 00:12:14.193506272 +0000 UTC m=+188.627240326"
Mar 20 00:12:14 crc kubenswrapper[5106]: I0320 00:12:14.213837 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4" podStartSLOduration=14.213818708 podStartE2EDuration="14.213818708s" podCreationTimestamp="2026-03-20 00:12:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:12:14.211476227 +0000 UTC m=+188.645210281" watchObservedRunningTime="2026-03-20 00:12:14.213818708 +0000 UTC m=+188.647552762"
Mar 20 00:12:15 crc kubenswrapper[5106]: I0320 00:12:15.200690 5106 generic.go:358] "Generic (PLEG): container finished" podID="68151e8d-2823-4f28-86ce-4a7508597ece" containerID="efe0afa107619bd54f72a773ecb4ff825833c6da5931d499a498f36be9fa5b62" exitCode=0
Mar 20 00:12:15 crc kubenswrapper[5106]: I0320 00:12:15.202261 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"68151e8d-2823-4f28-86ce-4a7508597ece","Type":"ContainerDied","Data":"efe0afa107619bd54f72a773ecb4ff825833c6da5931d499a498f36be9fa5b62"}
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.319214 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.835728 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.836131 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.859975 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/316971ca-bb80-40a7-9f09-fe5ef9fb388b-var-lock\") pod \"installer-12-crc\" (UID: \"316971ca-bb80-40a7-9f09-fe5ef9fb388b\") " pod="openshift-kube-apiserver/installer-12-crc"
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.860313 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/316971ca-bb80-40a7-9f09-fe5ef9fb388b-kube-api-access\") pod \"installer-12-crc\" (UID: \"316971ca-bb80-40a7-9f09-fe5ef9fb388b\") " pod="openshift-kube-apiserver/installer-12-crc"
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.860368 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/316971ca-bb80-40a7-9f09-fe5ef9fb388b-kubelet-dir\") pod \"installer-12-crc\" (UID: \"316971ca-bb80-40a7-9f09-fe5ef9fb388b\") " pod="openshift-kube-apiserver/installer-12-crc"
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.939157 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.961123 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68151e8d-2823-4f28-86ce-4a7508597ece-kubelet-dir\") pod \"68151e8d-2823-4f28-86ce-4a7508597ece\" (UID: \"68151e8d-2823-4f28-86ce-4a7508597ece\") "
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.961427 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68151e8d-2823-4f28-86ce-4a7508597ece-kube-api-access\") pod \"68151e8d-2823-4f28-86ce-4a7508597ece\" (UID: \"68151e8d-2823-4f28-86ce-4a7508597ece\") "
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.961286 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68151e8d-2823-4f28-86ce-4a7508597ece-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "68151e8d-2823-4f28-86ce-4a7508597ece" (UID: "68151e8d-2823-4f28-86ce-4a7508597ece"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.961853 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/316971ca-bb80-40a7-9f09-fe5ef9fb388b-var-lock\") pod \"installer-12-crc\" (UID: \"316971ca-bb80-40a7-9f09-fe5ef9fb388b\") " pod="openshift-kube-apiserver/installer-12-crc"
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.961946 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/316971ca-bb80-40a7-9f09-fe5ef9fb388b-kube-api-access\") pod \"installer-12-crc\" (UID: \"316971ca-bb80-40a7-9f09-fe5ef9fb388b\") " pod="openshift-kube-apiserver/installer-12-crc"
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.962019 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/316971ca-bb80-40a7-9f09-fe5ef9fb388b-var-lock\") pod \"installer-12-crc\" (UID: \"316971ca-bb80-40a7-9f09-fe5ef9fb388b\") " pod="openshift-kube-apiserver/installer-12-crc"
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.962162 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/316971ca-bb80-40a7-9f09-fe5ef9fb388b-kubelet-dir\") pod \"installer-12-crc\" (UID: \"316971ca-bb80-40a7-9f09-fe5ef9fb388b\") " pod="openshift-kube-apiserver/installer-12-crc"
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.962313 5106 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68151e8d-2823-4f28-86ce-4a7508597ece-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.962426 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/316971ca-bb80-40a7-9f09-fe5ef9fb388b-kubelet-dir\") pod \"installer-12-crc\" (UID: \"316971ca-bb80-40a7-9f09-fe5ef9fb388b\") " pod="openshift-kube-apiserver/installer-12-crc"
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.974085 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68151e8d-2823-4f28-86ce-4a7508597ece-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "68151e8d-2823-4f28-86ce-4a7508597ece" (UID: "68151e8d-2823-4f28-86ce-4a7508597ece"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:12:16 crc kubenswrapper[5106]: I0320 00:12:16.979525 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/316971ca-bb80-40a7-9f09-fe5ef9fb388b-kube-api-access\") pod \"installer-12-crc\" (UID: \"316971ca-bb80-40a7-9f09-fe5ef9fb388b\") " pod="openshift-kube-apiserver/installer-12-crc"
Mar 20 00:12:17 crc kubenswrapper[5106]: I0320 00:12:17.063807 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68151e8d-2823-4f28-86ce-4a7508597ece-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:17 crc kubenswrapper[5106]: I0320 00:12:17.215366 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"68151e8d-2823-4f28-86ce-4a7508597ece","Type":"ContainerDied","Data":"c49b819bfecb2a126ff7a60880ba0845db75ed7c5b83c12b8a648a36d4ba82ee"}
Mar 20 00:12:17 crc kubenswrapper[5106]: I0320 00:12:17.215415 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c49b819bfecb2a126ff7a60880ba0845db75ed7c5b83c12b8a648a36d4ba82ee"
Mar 20 00:12:17 crc kubenswrapper[5106]: I0320 00:12:17.215503 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Mar 20 00:12:17 crc kubenswrapper[5106]: I0320 00:12:17.223279 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Mar 20 00:12:17 crc kubenswrapper[5106]: I0320 00:12:17.581680 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Mar 20 00:12:17 crc kubenswrapper[5106]: W0320 00:12:17.589923 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod316971ca_bb80_40a7_9f09_fe5ef9fb388b.slice/crio-9e3c7c50b838d14217f20b16e207796169184cf23d1bedc0126a222ae5a7c739 WatchSource:0}: Error finding container 9e3c7c50b838d14217f20b16e207796169184cf23d1bedc0126a222ae5a7c739: Status 404 returned error can't find the container with id 9e3c7c50b838d14217f20b16e207796169184cf23d1bedc0126a222ae5a7c739
Mar 20 00:12:18 crc kubenswrapper[5106]: I0320 00:12:18.222645 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"316971ca-bb80-40a7-9f09-fe5ef9fb388b","Type":"ContainerStarted","Data":"9e3c7c50b838d14217f20b16e207796169184cf23d1bedc0126a222ae5a7c739"}
Mar 20 00:12:19 crc kubenswrapper[5106]: I0320 00:12:19.229533 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"316971ca-bb80-40a7-9f09-fe5ef9fb388b","Type":"ContainerStarted","Data":"f05fe504393f6117b964aa0776265f7d76f156c68b45e3ea8e008a8e50988f13"}
Mar 20 00:12:19 crc kubenswrapper[5106]: I0320 00:12:19.277400 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qtqct"
Mar 20 00:12:19 crc kubenswrapper[5106]: I0320 00:12:19.277443 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-qtqct"
Mar 20 00:12:19 crc kubenswrapper[5106]: I0320 00:12:19.369778 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qtqct"
Mar 20 00:12:19 crc kubenswrapper[5106]: I0320 00:12:19.443441 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bpzzz"
Mar 20 00:12:19 crc kubenswrapper[5106]: I0320 00:12:19.483937 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bpzzz"
Mar 20 00:12:19 crc kubenswrapper[5106]: I0320 00:12:19.677807 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mgd7w"
Mar 20 00:12:19 crc kubenswrapper[5106]: I0320 00:12:19.677852 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-mgd7w"
Mar 20 00:12:19 crc kubenswrapper[5106]: I0320 00:12:19.718802 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mgd7w"
Mar 20 00:12:19 crc kubenswrapper[5106]: I0320 00:12:19.839734 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b62gq"
Mar 20 00:12:19 crc kubenswrapper[5106]: I0320 00:12:19.839787 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-b62gq"
Mar 20 00:12:19 crc kubenswrapper[5106]: I0320 00:12:19.875354 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b62gq"
Mar 20 00:12:20 crc kubenswrapper[5106]: I0320 00:12:20.274069 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=4.274046007 podStartE2EDuration="4.274046007s" podCreationTimestamp="2026-03-20 00:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:12:20.264371937 +0000 UTC m=+194.698106001" watchObservedRunningTime="2026-03-20 00:12:20.274046007 +0000 UTC m=+194.707780071"
Mar 20 00:12:20 crc kubenswrapper[5106]: I0320 00:12:20.281758 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b62gq"
Mar 20 00:12:20 crc kubenswrapper[5106]: I0320 00:12:20.286979 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qtqct"
Mar 20 00:12:20 crc kubenswrapper[5106]: I0320 00:12:20.309840 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mgd7w"
Mar 20 00:12:20 crc kubenswrapper[5106]: I0320 00:12:20.490481 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Mar 20 00:12:20 crc kubenswrapper[5106]: I0320 00:12:20.490856 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused"
Mar 20 00:12:21 crc kubenswrapper[5106]: I0320 00:12:21.398406 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b62gq"]
Mar 20 00:12:21 crc kubenswrapper[5106]: I0320 00:12:21.572845 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c7cgp"
Mar 20 00:12:21 crc kubenswrapper[5106]: I0320 00:12:21.608948 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c7cgp"
Mar 20 00:12:21 crc kubenswrapper[5106]: I0320 00:12:21.980162 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vwx2n"
Mar 20 00:12:22 crc kubenswrapper[5106]: I0320 00:12:22.000547 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mgd7w"]
Mar 20 00:12:22 crc kubenswrapper[5106]: I0320 00:12:22.026142 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vwx2n"
Mar 20 00:12:22 crc kubenswrapper[5106]: I0320 00:12:22.245737 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b62gq" podUID="fe8416cb-a9a0-45bd-aec9-25549b0c4551" containerName="registry-server" containerID="cri-o://b64a5b7ab5c8ff5b7beca98285f941dc637f05d4da3efb57d3c83e6bffac4365" gracePeriod=2
Mar 20 00:12:22 crc kubenswrapper[5106]: I0320 00:12:22.246206 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mgd7w" podUID="a342c56e-aefd-443c-b37a-af158660104d" containerName="registry-server" containerID="cri-o://ff5d5946faa8a5808f6ed436c5ff94f8bf2cfef903efa2c9cacfa328fb84fb45" gracePeriod=2
Mar 20 00:12:22 crc kubenswrapper[5106]: I0320 00:12:22.507621 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nv56p"
Mar 20 00:12:22 crc kubenswrapper[5106]: I0320 00:12:22.507700 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-nv56p"
Mar 20 00:12:22 crc kubenswrapper[5106]: I0320 00:12:22.553049 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nv56p"
Mar 20 00:12:22 crc kubenswrapper[5106]: I0320 00:12:22.652613 5106 patch_prober.go:28] interesting pod/downloads-747b44746d-ss8gd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Mar 20 00:12:22 crc kubenswrapper[5106]: I0320 00:12:22.652688 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ss8gd" podUID="9662276f-9936-4ed0-a464-c509bbaaa7a0" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused"
Mar 20 00:12:22 crc kubenswrapper[5106]: I0320 00:12:22.951134 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fvw5w"
Mar 20 00:12:22 crc kubenswrapper[5106]: I0320 00:12:22.994550 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fvw5w"
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.255638 5106 generic.go:358] "Generic (PLEG): container finished" podID="a342c56e-aefd-443c-b37a-af158660104d" containerID="ff5d5946faa8a5808f6ed436c5ff94f8bf2cfef903efa2c9cacfa328fb84fb45" exitCode=0
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.255708 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mgd7w" event={"ID":"a342c56e-aefd-443c-b37a-af158660104d","Type":"ContainerDied","Data":"ff5d5946faa8a5808f6ed436c5ff94f8bf2cfef903efa2c9cacfa328fb84fb45"}
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.263071 5106 generic.go:358] "Generic (PLEG): container finished" podID="fe8416cb-a9a0-45bd-aec9-25549b0c4551" containerID="b64a5b7ab5c8ff5b7beca98285f941dc637f05d4da3efb57d3c83e6bffac4365" exitCode=0
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.263151 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b62gq" event={"ID":"fe8416cb-a9a0-45bd-aec9-25549b0c4551","Type":"ContainerDied","Data":"b64a5b7ab5c8ff5b7beca98285f941dc637f05d4da3efb57d3c83e6bffac4365"}
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.302987 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nv56p"
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.579925 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mgd7w"
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.665618 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a342c56e-aefd-443c-b37a-af158660104d-catalog-content\") pod \"a342c56e-aefd-443c-b37a-af158660104d\" (UID: \"a342c56e-aefd-443c-b37a-af158660104d\") "
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.665709 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a342c56e-aefd-443c-b37a-af158660104d-utilities\") pod \"a342c56e-aefd-443c-b37a-af158660104d\" (UID: \"a342c56e-aefd-443c-b37a-af158660104d\") "
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.665734 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmhrk\" (UniqueName: \"kubernetes.io/projected/a342c56e-aefd-443c-b37a-af158660104d-kube-api-access-xmhrk\") pod \"a342c56e-aefd-443c-b37a-af158660104d\" (UID: \"a342c56e-aefd-443c-b37a-af158660104d\") "
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.668266 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a342c56e-aefd-443c-b37a-af158660104d-utilities" (OuterVolumeSpecName: "utilities") pod "a342c56e-aefd-443c-b37a-af158660104d" (UID: "a342c56e-aefd-443c-b37a-af158660104d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.671292 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a342c56e-aefd-443c-b37a-af158660104d-kube-api-access-xmhrk" (OuterVolumeSpecName: "kube-api-access-xmhrk") pod "a342c56e-aefd-443c-b37a-af158660104d" (UID: "a342c56e-aefd-443c-b37a-af158660104d"). InnerVolumeSpecName "kube-api-access-xmhrk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.688869 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b62gq"
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.743883 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a342c56e-aefd-443c-b37a-af158660104d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a342c56e-aefd-443c-b37a-af158660104d" (UID: "a342c56e-aefd-443c-b37a-af158660104d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.766465 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe8416cb-a9a0-45bd-aec9-25549b0c4551-catalog-content\") pod \"fe8416cb-a9a0-45bd-aec9-25549b0c4551\" (UID: \"fe8416cb-a9a0-45bd-aec9-25549b0c4551\") "
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.766515 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe8416cb-a9a0-45bd-aec9-25549b0c4551-utilities\") pod \"fe8416cb-a9a0-45bd-aec9-25549b0c4551\" (UID: \"fe8416cb-a9a0-45bd-aec9-25549b0c4551\") "
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.766668 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md9td\" (UniqueName: \"kubernetes.io/projected/fe8416cb-a9a0-45bd-aec9-25549b0c4551-kube-api-access-md9td\") pod \"fe8416cb-a9a0-45bd-aec9-25549b0c4551\" (UID: \"fe8416cb-a9a0-45bd-aec9-25549b0c4551\") "
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.766879 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a342c56e-aefd-443c-b37a-af158660104d-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.766894 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xmhrk\" (UniqueName: \"kubernetes.io/projected/a342c56e-aefd-443c-b37a-af158660104d-kube-api-access-xmhrk\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.766903 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a342c56e-aefd-443c-b37a-af158660104d-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.767684
5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe8416cb-a9a0-45bd-aec9-25549b0c4551-utilities" (OuterVolumeSpecName: "utilities") pod "fe8416cb-a9a0-45bd-aec9-25549b0c4551" (UID: "fe8416cb-a9a0-45bd-aec9-25549b0c4551"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.770057 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe8416cb-a9a0-45bd-aec9-25549b0c4551-kube-api-access-md9td" (OuterVolumeSpecName: "kube-api-access-md9td") pod "fe8416cb-a9a0-45bd-aec9-25549b0c4551" (UID: "fe8416cb-a9a0-45bd-aec9-25549b0c4551"). InnerVolumeSpecName "kube-api-access-md9td". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.806999 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe8416cb-a9a0-45bd-aec9-25549b0c4551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe8416cb-a9a0-45bd-aec9-25549b0c4551" (UID: "fe8416cb-a9a0-45bd-aec9-25549b0c4551"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.868327 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-md9td\" (UniqueName: \"kubernetes.io/projected/fe8416cb-a9a0-45bd-aec9-25549b0c4551-kube-api-access-md9td\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.868365 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe8416cb-a9a0-45bd-aec9-25549b0c4551-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:23 crc kubenswrapper[5106]: I0320 00:12:23.868374 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe8416cb-a9a0-45bd-aec9-25549b0c4551-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.270892 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b62gq" Mar 20 00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.270892 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b62gq" event={"ID":"fe8416cb-a9a0-45bd-aec9-25549b0c4551","Type":"ContainerDied","Data":"7907620905dd4d03808b26c0bb0cd14fcb2982ac052a90a6382c248b85850a66"} Mar 20 00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.271040 5106 scope.go:117] "RemoveContainer" containerID="b64a5b7ab5c8ff5b7beca98285f941dc637f05d4da3efb57d3c83e6bffac4365" Mar 20 00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.273396 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mgd7w" Mar 20 00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.273406 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mgd7w" event={"ID":"a342c56e-aefd-443c-b37a-af158660104d","Type":"ContainerDied","Data":"203e35c446c7602acc76bf478d5f444c12218e8409c46adfc347663d1c275dac"} Mar 20 00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.305073 5106 scope.go:117] "RemoveContainer" containerID="41764deaaeaac256e62beb2aefa1d9c711ab539ce0f3c1f6e6319d8008e2abe9" Mar 20 00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.307927 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b62gq"] Mar 20 00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.318625 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b62gq"] Mar 20 00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.328122 5106 scope.go:117] "RemoveContainer" containerID="aa1c7a3fe6484625fd77947d76c0c5f3d78c000976689d8b0d9a97362c87e2b8" Mar 20 00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.345476 5106 scope.go:117] "RemoveContainer" containerID="ff5d5946faa8a5808f6ed436c5ff94f8bf2cfef903efa2c9cacfa328fb84fb45" Mar 20 00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.362527 5106 scope.go:117] "RemoveContainer" containerID="54e8589037baf1553ef7aca76c9cc210518edbf476ee5f13d443a36cff66c527" Mar 20 00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.379609 5106 scope.go:117] "RemoveContainer" containerID="9fffd286dd22c6ace19dba6b50dd5103d5964efc67c7c59e4166d24edc757b06" Mar 20 00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.683544 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mgd7w"] Mar 20 00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.683972 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mgd7w"] Mar 20 
00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.683993 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwx2n"] Mar 20 00:12:24 crc kubenswrapper[5106]: I0320 00:12:24.684361 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vwx2n" podUID="29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" containerName="registry-server" containerID="cri-o://b502b4535e352ec45bda4daf16d7b8689d88e8b727a3f1cb732cb27833301660" gracePeriod=2 Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.172564 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a342c56e-aefd-443c-b37a-af158660104d" path="/var/lib/kubelet/pods/a342c56e-aefd-443c-b37a-af158660104d/volumes" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.173714 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe8416cb-a9a0-45bd-aec9-25549b0c4551" path="/var/lib/kubelet/pods/fe8416cb-a9a0-45bd-aec9-25549b0c4551/volumes" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.245932 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwx2n" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.282599 5106 generic.go:358] "Generic (PLEG): container finished" podID="29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" containerID="b502b4535e352ec45bda4daf16d7b8689d88e8b727a3f1cb732cb27833301660" exitCode=0 Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.282671 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwx2n" event={"ID":"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8","Type":"ContainerDied","Data":"b502b4535e352ec45bda4daf16d7b8689d88e8b727a3f1cb732cb27833301660"} Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.282699 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwx2n" event={"ID":"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8","Type":"ContainerDied","Data":"b795fd9d886b1008651dba585f8bdff443717cd83eba0e4ea93624cd09452adc"} Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.282715 5106 scope.go:117] "RemoveContainer" containerID="b502b4535e352ec45bda4daf16d7b8689d88e8b727a3f1cb732cb27833301660" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.282824 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwx2n" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.284608 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566092-6knrz" event={"ID":"92c58c24-f3dc-45d1-bf1f-1a679ae95553","Type":"ContainerStarted","Data":"555cf5368ac0caa29bf7158992d54da737b48532e35d08e7d764c83fd4aa8e55"} Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.287136 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-utilities\") pod \"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8\" (UID: \"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8\") " Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.287350 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-catalog-content\") pod \"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8\" (UID: \"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8\") " Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.287402 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldn2w\" (UniqueName: \"kubernetes.io/projected/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-kube-api-access-ldn2w\") pod \"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8\" (UID: \"29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8\") " Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.288316 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-utilities" (OuterVolumeSpecName: "utilities") pod "29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" (UID: "29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.298275 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-kube-api-access-ldn2w" (OuterVolumeSpecName: "kube-api-access-ldn2w") pod "29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" (UID: "29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8"). InnerVolumeSpecName "kube-api-access-ldn2w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.314955 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" (UID: "29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.318751 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566092-6knrz" podStartSLOduration=9.382766664 podStartE2EDuration="25.318735022s" podCreationTimestamp="2026-03-20 00:12:00 +0000 UTC" firstStartedPulling="2026-03-20 00:12:07.674461634 +0000 UTC m=+182.108195688" lastFinishedPulling="2026-03-20 00:12:23.610429992 +0000 UTC m=+198.044164046" observedRunningTime="2026-03-20 00:12:25.316560875 +0000 UTC m=+199.750294929" watchObservedRunningTime="2026-03-20 00:12:25.318735022 +0000 UTC m=+199.752469076" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.327751 5106 scope.go:117] "RemoveContainer" containerID="fb2bb5366740cef5a31d42570f2829b51a69eb30e795539a1504564bbc86d0d6" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.354161 5106 scope.go:117] "RemoveContainer" containerID="2660405be196f8690464e6d750a8f18d00c83bd1c22e19a8e2675c555ab7b406" Mar 20 00:12:25 crc kubenswrapper[5106]: 
I0320 00:12:25.369990 5106 scope.go:117] "RemoveContainer" containerID="b502b4535e352ec45bda4daf16d7b8689d88e8b727a3f1cb732cb27833301660" Mar 20 00:12:25 crc kubenswrapper[5106]: E0320 00:12:25.370383 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b502b4535e352ec45bda4daf16d7b8689d88e8b727a3f1cb732cb27833301660\": container with ID starting with b502b4535e352ec45bda4daf16d7b8689d88e8b727a3f1cb732cb27833301660 not found: ID does not exist" containerID="b502b4535e352ec45bda4daf16d7b8689d88e8b727a3f1cb732cb27833301660" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.370414 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b502b4535e352ec45bda4daf16d7b8689d88e8b727a3f1cb732cb27833301660"} err="failed to get container status \"b502b4535e352ec45bda4daf16d7b8689d88e8b727a3f1cb732cb27833301660\": rpc error: code = NotFound desc = could not find container \"b502b4535e352ec45bda4daf16d7b8689d88e8b727a3f1cb732cb27833301660\": container with ID starting with b502b4535e352ec45bda4daf16d7b8689d88e8b727a3f1cb732cb27833301660 not found: ID does not exist" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.370432 5106 scope.go:117] "RemoveContainer" containerID="fb2bb5366740cef5a31d42570f2829b51a69eb30e795539a1504564bbc86d0d6" Mar 20 00:12:25 crc kubenswrapper[5106]: E0320 00:12:25.370774 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb2bb5366740cef5a31d42570f2829b51a69eb30e795539a1504564bbc86d0d6\": container with ID starting with fb2bb5366740cef5a31d42570f2829b51a69eb30e795539a1504564bbc86d0d6 not found: ID does not exist" containerID="fb2bb5366740cef5a31d42570f2829b51a69eb30e795539a1504564bbc86d0d6" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.370796 5106 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fb2bb5366740cef5a31d42570f2829b51a69eb30e795539a1504564bbc86d0d6"} err="failed to get container status \"fb2bb5366740cef5a31d42570f2829b51a69eb30e795539a1504564bbc86d0d6\": rpc error: code = NotFound desc = could not find container \"fb2bb5366740cef5a31d42570f2829b51a69eb30e795539a1504564bbc86d0d6\": container with ID starting with fb2bb5366740cef5a31d42570f2829b51a69eb30e795539a1504564bbc86d0d6 not found: ID does not exist" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.370808 5106 scope.go:117] "RemoveContainer" containerID="2660405be196f8690464e6d750a8f18d00c83bd1c22e19a8e2675c555ab7b406" Mar 20 00:12:25 crc kubenswrapper[5106]: E0320 00:12:25.371114 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2660405be196f8690464e6d750a8f18d00c83bd1c22e19a8e2675c555ab7b406\": container with ID starting with 2660405be196f8690464e6d750a8f18d00c83bd1c22e19a8e2675c555ab7b406 not found: ID does not exist" containerID="2660405be196f8690464e6d750a8f18d00c83bd1c22e19a8e2675c555ab7b406" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.371133 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2660405be196f8690464e6d750a8f18d00c83bd1c22e19a8e2675c555ab7b406"} err="failed to get container status \"2660405be196f8690464e6d750a8f18d00c83bd1c22e19a8e2675c555ab7b406\": rpc error: code = NotFound desc = could not find container \"2660405be196f8690464e6d750a8f18d00c83bd1c22e19a8e2675c555ab7b406\": container with ID starting with 2660405be196f8690464e6d750a8f18d00c83bd1c22e19a8e2675c555ab7b406 not found: ID does not exist" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.390146 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:25 crc 
kubenswrapper[5106]: I0320 00:12:25.390169 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ldn2w\" (UniqueName: \"kubernetes.io/projected/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-kube-api-access-ldn2w\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.390179 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.517002 5106 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-255d2" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.523711 5106 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-255d2" Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.622700 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwx2n"] Mar 20 00:12:25 crc kubenswrapper[5106]: I0320 00:12:25.627385 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwx2n"] Mar 20 00:12:26 crc kubenswrapper[5106]: I0320 00:12:26.305140 5106 generic.go:358] "Generic (PLEG): container finished" podID="92c58c24-f3dc-45d1-bf1f-1a679ae95553" containerID="555cf5368ac0caa29bf7158992d54da737b48532e35d08e7d764c83fd4aa8e55" exitCode=0 Mar 20 00:12:26 crc kubenswrapper[5106]: I0320 00:12:26.305228 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566092-6knrz" event={"ID":"92c58c24-f3dc-45d1-bf1f-1a679ae95553","Type":"ContainerDied","Data":"555cf5368ac0caa29bf7158992d54da737b48532e35d08e7d764c83fd4aa8e55"} Mar 20 00:12:26 crc kubenswrapper[5106]: I0320 00:12:26.525550 5106 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" 
expiration="2026-04-19 00:07:25 +0000 UTC" deadline="2026-04-15 15:59:11.29376193 +0000 UTC" Mar 20 00:12:26 crc kubenswrapper[5106]: I0320 00:12:26.525625 5106 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="639h46m44.768142025s" Mar 20 00:12:26 crc kubenswrapper[5106]: I0320 00:12:26.799919 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvw5w"] Mar 20 00:12:26 crc kubenswrapper[5106]: I0320 00:12:26.800213 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fvw5w" podUID="3ff85b1d-ffcc-44c1-a340-5e15b96f36db" containerName="registry-server" containerID="cri-o://38cc5ddeb422e8c1418b76ad9e17d4e43f7de62f1c49bdead6baf1d2bf7c2d6f" gracePeriod=2 Mar 20 00:12:27 crc kubenswrapper[5106]: I0320 00:12:27.169108 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" path="/var/lib/kubelet/pods/29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8/volumes" Mar 20 00:12:27 crc kubenswrapper[5106]: I0320 00:12:27.321624 5106 generic.go:358] "Generic (PLEG): container finished" podID="3ff85b1d-ffcc-44c1-a340-5e15b96f36db" containerID="38cc5ddeb422e8c1418b76ad9e17d4e43f7de62f1c49bdead6baf1d2bf7c2d6f" exitCode=0 Mar 20 00:12:27 crc kubenswrapper[5106]: I0320 00:12:27.321996 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvw5w" event={"ID":"3ff85b1d-ffcc-44c1-a340-5e15b96f36db","Type":"ContainerDied","Data":"38cc5ddeb422e8c1418b76ad9e17d4e43f7de62f1c49bdead6baf1d2bf7c2d6f"} Mar 20 00:12:27 crc kubenswrapper[5106]: I0320 00:12:27.633257 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566092-6knrz" Mar 20 00:12:27 crc kubenswrapper[5106]: I0320 00:12:27.740784 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9t5r\" (UniqueName: \"kubernetes.io/projected/92c58c24-f3dc-45d1-bf1f-1a679ae95553-kube-api-access-d9t5r\") pod \"92c58c24-f3dc-45d1-bf1f-1a679ae95553\" (UID: \"92c58c24-f3dc-45d1-bf1f-1a679ae95553\") " Mar 20 00:12:27 crc kubenswrapper[5106]: I0320 00:12:27.761282 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c58c24-f3dc-45d1-bf1f-1a679ae95553-kube-api-access-d9t5r" (OuterVolumeSpecName: "kube-api-access-d9t5r") pod "92c58c24-f3dc-45d1-bf1f-1a679ae95553" (UID: "92c58c24-f3dc-45d1-bf1f-1a679ae95553"). InnerVolumeSpecName "kube-api-access-d9t5r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:12:27 crc kubenswrapper[5106]: I0320 00:12:27.841943 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d9t5r\" (UniqueName: \"kubernetes.io/projected/92c58c24-f3dc-45d1-bf1f-1a679ae95553-kube-api-access-d9t5r\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.291655 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fvw5w" Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.330341 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fvw5w" event={"ID":"3ff85b1d-ffcc-44c1-a340-5e15b96f36db","Type":"ContainerDied","Data":"a42e7cc64633ccace209510f988e9db9bbb6174d5d192bfc5b9f5764820170f5"} Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.330373 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fvw5w" Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.330391 5106 scope.go:117] "RemoveContainer" containerID="38cc5ddeb422e8c1418b76ad9e17d4e43f7de62f1c49bdead6baf1d2bf7c2d6f" Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.333384 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566092-6knrz" Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.333401 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566092-6knrz" event={"ID":"92c58c24-f3dc-45d1-bf1f-1a679ae95553","Type":"ContainerDied","Data":"0a7aa24cb916c6210458b00dcaca4ee807c4c8d280ab3786ab6fc8e7b0f9fbfb"} Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.333442 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a7aa24cb916c6210458b00dcaca4ee807c4c8d280ab3786ab6fc8e7b0f9fbfb" Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.350731 5106 scope.go:117] "RemoveContainer" containerID="e55a45a6b96fc2a721a626475ffbe15c1b8f6278ea945dd39860df902f563fde" Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.368821 5106 scope.go:117] "RemoveContainer" containerID="fd25608c3a0b7dfe658a9be9184410c5f0b162f924e3dac2e025781182535ef4" Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.451018 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-utilities\") pod \"3ff85b1d-ffcc-44c1-a340-5e15b96f36db\" (UID: \"3ff85b1d-ffcc-44c1-a340-5e15b96f36db\") " Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.451124 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q2hc\" (UniqueName: \"kubernetes.io/projected/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-kube-api-access-5q2hc\") pod 
\"3ff85b1d-ffcc-44c1-a340-5e15b96f36db\" (UID: \"3ff85b1d-ffcc-44c1-a340-5e15b96f36db\") " Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.451195 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-catalog-content\") pod \"3ff85b1d-ffcc-44c1-a340-5e15b96f36db\" (UID: \"3ff85b1d-ffcc-44c1-a340-5e15b96f36db\") " Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.452234 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-utilities" (OuterVolumeSpecName: "utilities") pod "3ff85b1d-ffcc-44c1-a340-5e15b96f36db" (UID: "3ff85b1d-ffcc-44c1-a340-5e15b96f36db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.456879 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-kube-api-access-5q2hc" (OuterVolumeSpecName: "kube-api-access-5q2hc") pod "3ff85b1d-ffcc-44c1-a340-5e15b96f36db" (UID: "3ff85b1d-ffcc-44c1-a340-5e15b96f36db"). InnerVolumeSpecName "kube-api-access-5q2hc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.552481 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.552506 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5q2hc\" (UniqueName: \"kubernetes.io/projected/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-kube-api-access-5q2hc\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.841830 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ff85b1d-ffcc-44c1-a340-5e15b96f36db" (UID: "3ff85b1d-ffcc-44c1-a340-5e15b96f36db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.857119 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff85b1d-ffcc-44c1-a340-5e15b96f36db-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.957517 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fvw5w"] Mar 20 00:12:28 crc kubenswrapper[5106]: I0320 00:12:28.959947 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fvw5w"] Mar 20 00:12:29 crc kubenswrapper[5106]: I0320 00:12:29.173143 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ff85b1d-ffcc-44c1-a340-5e15b96f36db" path="/var/lib/kubelet/pods/3ff85b1d-ffcc-44c1-a340-5e15b96f36db/volumes" Mar 20 00:12:30 crc kubenswrapper[5106]: I0320 00:12:30.494912 5106 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-ss8gd"
Mar 20 00:12:30 crc kubenswrapper[5106]: I0320 00:12:30.673490 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-zbpp6"]
Mar 20 00:12:40 crc kubenswrapper[5106]: I0320 00:12:40.876952 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7675cb6858-lssg4"]
Mar 20 00:12:40 crc kubenswrapper[5106]: I0320 00:12:40.877759 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" podUID="c318e5ed-5262-4842-ba76-4ac168e42455" containerName="controller-manager" containerID="cri-o://6548209dfd9f24abed7e4a8a5258abc48c5a97ed6dcf781d6cb9ddb9ebb12089" gracePeriod=30
Mar 20 00:12:40 crc kubenswrapper[5106]: I0320 00:12:40.910220 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"]
Mar 20 00:12:40 crc kubenswrapper[5106]: I0320 00:12:40.910900 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4" podUID="fc1bf81e-8b70-4151-b76c-2904905725f6" containerName="route-controller-manager" containerID="cri-o://23ba0a3f0411521039770c9b2a262483c546c2e681d96ed1c8eaaac220e72e5d" gracePeriod=30
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.414288 5106 generic.go:358] "Generic (PLEG): container finished" podID="fc1bf81e-8b70-4151-b76c-2904905725f6" containerID="23ba0a3f0411521039770c9b2a262483c546c2e681d96ed1c8eaaac220e72e5d" exitCode=0
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.414395 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4" event={"ID":"fc1bf81e-8b70-4151-b76c-2904905725f6","Type":"ContainerDied","Data":"23ba0a3f0411521039770c9b2a262483c546c2e681d96ed1c8eaaac220e72e5d"}
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.414429 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4" event={"ID":"fc1bf81e-8b70-4151-b76c-2904905725f6","Type":"ContainerDied","Data":"d7097d7f63a2107797dcb28783adbc24eeb113db45f3c112b66243df73602692"}
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.414442 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7097d7f63a2107797dcb28783adbc24eeb113db45f3c112b66243df73602692"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.415706 5106 generic.go:358] "Generic (PLEG): container finished" podID="c318e5ed-5262-4842-ba76-4ac168e42455" containerID="6548209dfd9f24abed7e4a8a5258abc48c5a97ed6dcf781d6cb9ddb9ebb12089" exitCode=0
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.415773 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" event={"ID":"c318e5ed-5262-4842-ba76-4ac168e42455","Type":"ContainerDied","Data":"6548209dfd9f24abed7e4a8a5258abc48c5a97ed6dcf781d6cb9ddb9ebb12089"}
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.425685 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454011 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"]
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454563 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="68151e8d-2823-4f28-86ce-4a7508597ece" containerName="pruner"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454600 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="68151e8d-2823-4f28-86ce-4a7508597ece" containerName="pruner"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454613 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="92c58c24-f3dc-45d1-bf1f-1a679ae95553" containerName="oc"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454620 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c58c24-f3dc-45d1-bf1f-1a679ae95553" containerName="oc"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454628 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ff85b1d-ffcc-44c1-a340-5e15b96f36db" containerName="extract-content"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454635 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff85b1d-ffcc-44c1-a340-5e15b96f36db" containerName="extract-content"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454644 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" containerName="extract-utilities"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454650 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" containerName="extract-utilities"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454660 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ff85b1d-ffcc-44c1-a340-5e15b96f36db" containerName="extract-utilities"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454667 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff85b1d-ffcc-44c1-a340-5e15b96f36db" containerName="extract-utilities"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454677 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" containerName="registry-server"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454682 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" containerName="registry-server"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454691 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe8416cb-a9a0-45bd-aec9-25549b0c4551" containerName="registry-server"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454697 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8416cb-a9a0-45bd-aec9-25549b0c4551" containerName="registry-server"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454704 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc1bf81e-8b70-4151-b76c-2904905725f6" containerName="route-controller-manager"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454709 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1bf81e-8b70-4151-b76c-2904905725f6" containerName="route-controller-manager"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454719 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe8416cb-a9a0-45bd-aec9-25549b0c4551" containerName="extract-content"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454724 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8416cb-a9a0-45bd-aec9-25549b0c4551" containerName="extract-content"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454735 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a342c56e-aefd-443c-b37a-af158660104d" containerName="extract-utilities"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454741 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="a342c56e-aefd-443c-b37a-af158660104d" containerName="extract-utilities"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454751 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fe8416cb-a9a0-45bd-aec9-25549b0c4551" containerName="extract-utilities"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454760 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8416cb-a9a0-45bd-aec9-25549b0c4551" containerName="extract-utilities"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454767 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ff85b1d-ffcc-44c1-a340-5e15b96f36db" containerName="registry-server"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454774 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff85b1d-ffcc-44c1-a340-5e15b96f36db" containerName="registry-server"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454780 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" containerName="extract-content"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454785 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" containerName="extract-content"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454795 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a342c56e-aefd-443c-b37a-af158660104d" containerName="extract-content"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454800 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="a342c56e-aefd-443c-b37a-af158660104d" containerName="extract-content"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454807 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a342c56e-aefd-443c-b37a-af158660104d" containerName="registry-server"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454813 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="a342c56e-aefd-443c-b37a-af158660104d" containerName="registry-server"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454902 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="92c58c24-f3dc-45d1-bf1f-1a679ae95553" containerName="oc"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454913 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="a342c56e-aefd-443c-b37a-af158660104d" containerName="registry-server"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454922 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="68151e8d-2823-4f28-86ce-4a7508597ece" containerName="pruner"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454932 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="3ff85b1d-ffcc-44c1-a340-5e15b96f36db" containerName="registry-server"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454938 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="fe8416cb-a9a0-45bd-aec9-25549b0c4551" containerName="registry-server"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454945 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="fc1bf81e-8b70-4151-b76c-2904905725f6" containerName="route-controller-manager"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.454952 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="29e1a1c1-f9e1-4524-bb8c-1e2a760e01f8" containerName="registry-server"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.459936 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.477477 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"]
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.540500 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc1bf81e-8b70-4151-b76c-2904905725f6-config\") pod \"fc1bf81e-8b70-4151-b76c-2904905725f6\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") "
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.540628 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc1bf81e-8b70-4151-b76c-2904905725f6-serving-cert\") pod \"fc1bf81e-8b70-4151-b76c-2904905725f6\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") "
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.540661 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzgwd\" (UniqueName: \"kubernetes.io/projected/fc1bf81e-8b70-4151-b76c-2904905725f6-kube-api-access-kzgwd\") pod \"fc1bf81e-8b70-4151-b76c-2904905725f6\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") "
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.540688 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fc1bf81e-8b70-4151-b76c-2904905725f6-tmp\") pod \"fc1bf81e-8b70-4151-b76c-2904905725f6\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") "
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.540762 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc1bf81e-8b70-4151-b76c-2904905725f6-client-ca\") pod \"fc1bf81e-8b70-4151-b76c-2904905725f6\" (UID: \"fc1bf81e-8b70-4151-b76c-2904905725f6\") "
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.540886 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bac7145-7e39-408b-bc0f-5971365fc72c-config\") pod \"route-controller-manager-7d5d9498-dfsw8\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.540911 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrzjf\" (UniqueName: \"kubernetes.io/projected/8bac7145-7e39-408b-bc0f-5971365fc72c-kube-api-access-mrzjf\") pod \"route-controller-manager-7d5d9498-dfsw8\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.540952 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bac7145-7e39-408b-bc0f-5971365fc72c-client-ca\") pod \"route-controller-manager-7d5d9498-dfsw8\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.540970 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bac7145-7e39-408b-bc0f-5971365fc72c-serving-cert\") pod \"route-controller-manager-7d5d9498-dfsw8\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.541020 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8bac7145-7e39-408b-bc0f-5971365fc72c-tmp\") pod \"route-controller-manager-7d5d9498-dfsw8\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.541982 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc1bf81e-8b70-4151-b76c-2904905725f6-client-ca" (OuterVolumeSpecName: "client-ca") pod "fc1bf81e-8b70-4151-b76c-2904905725f6" (UID: "fc1bf81e-8b70-4151-b76c-2904905725f6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.543098 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc1bf81e-8b70-4151-b76c-2904905725f6-config" (OuterVolumeSpecName: "config") pod "fc1bf81e-8b70-4151-b76c-2904905725f6" (UID: "fc1bf81e-8b70-4151-b76c-2904905725f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.544970 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc1bf81e-8b70-4151-b76c-2904905725f6-tmp" (OuterVolumeSpecName: "tmp") pod "fc1bf81e-8b70-4151-b76c-2904905725f6" (UID: "fc1bf81e-8b70-4151-b76c-2904905725f6"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.557279 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc1bf81e-8b70-4151-b76c-2904905725f6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fc1bf81e-8b70-4151-b76c-2904905725f6" (UID: "fc1bf81e-8b70-4151-b76c-2904905725f6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.558345 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc1bf81e-8b70-4151-b76c-2904905725f6-kube-api-access-kzgwd" (OuterVolumeSpecName: "kube-api-access-kzgwd") pod "fc1bf81e-8b70-4151-b76c-2904905725f6" (UID: "fc1bf81e-8b70-4151-b76c-2904905725f6"). InnerVolumeSpecName "kube-api-access-kzgwd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.642758 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8bac7145-7e39-408b-bc0f-5971365fc72c-tmp\") pod \"route-controller-manager-7d5d9498-dfsw8\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.642831 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bac7145-7e39-408b-bc0f-5971365fc72c-config\") pod \"route-controller-manager-7d5d9498-dfsw8\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.642860 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mrzjf\" (UniqueName: \"kubernetes.io/projected/8bac7145-7e39-408b-bc0f-5971365fc72c-kube-api-access-mrzjf\") pod \"route-controller-manager-7d5d9498-dfsw8\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.642902 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bac7145-7e39-408b-bc0f-5971365fc72c-client-ca\") pod \"route-controller-manager-7d5d9498-dfsw8\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.642922 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bac7145-7e39-408b-bc0f-5971365fc72c-serving-cert\") pod \"route-controller-manager-7d5d9498-dfsw8\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.642970 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc1bf81e-8b70-4151-b76c-2904905725f6-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.642982 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kzgwd\" (UniqueName: \"kubernetes.io/projected/fc1bf81e-8b70-4151-b76c-2904905725f6-kube-api-access-kzgwd\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.642993 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fc1bf81e-8b70-4151-b76c-2904905725f6-tmp\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.643005 5106 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc1bf81e-8b70-4151-b76c-2904905725f6-client-ca\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.643014 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc1bf81e-8b70-4151-b76c-2904905725f6-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.644537 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8bac7145-7e39-408b-bc0f-5971365fc72c-tmp\") pod \"route-controller-manager-7d5d9498-dfsw8\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.645052 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bac7145-7e39-408b-bc0f-5971365fc72c-config\") pod \"route-controller-manager-7d5d9498-dfsw8\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.645208 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bac7145-7e39-408b-bc0f-5971365fc72c-client-ca\") pod \"route-controller-manager-7d5d9498-dfsw8\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.648140 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bac7145-7e39-408b-bc0f-5971365fc72c-serving-cert\") pod \"route-controller-manager-7d5d9498-dfsw8\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.652200 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.660283 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrzjf\" (UniqueName: \"kubernetes.io/projected/8bac7145-7e39-408b-bc0f-5971365fc72c-kube-api-access-mrzjf\") pod \"route-controller-manager-7d5d9498-dfsw8\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.676212 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"]
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.676856 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c318e5ed-5262-4842-ba76-4ac168e42455" containerName="controller-manager"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.676875 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="c318e5ed-5262-4842-ba76-4ac168e42455" containerName="controller-manager"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.676981 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="c318e5ed-5262-4842-ba76-4ac168e42455" containerName="controller-manager"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.693473 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"]
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.693628 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.743980 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpkzf\" (UniqueName: \"kubernetes.io/projected/c318e5ed-5262-4842-ba76-4ac168e42455-kube-api-access-fpkzf\") pod \"c318e5ed-5262-4842-ba76-4ac168e42455\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") "
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.744055 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-config\") pod \"c318e5ed-5262-4842-ba76-4ac168e42455\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") "
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.744076 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-client-ca\") pod \"c318e5ed-5262-4842-ba76-4ac168e42455\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") "
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.744109 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c318e5ed-5262-4842-ba76-4ac168e42455-tmp\") pod \"c318e5ed-5262-4842-ba76-4ac168e42455\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") "
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.744186 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c318e5ed-5262-4842-ba76-4ac168e42455-serving-cert\") pod \"c318e5ed-5262-4842-ba76-4ac168e42455\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") "
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.744221 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-proxy-ca-bundles\") pod \"c318e5ed-5262-4842-ba76-4ac168e42455\" (UID: \"c318e5ed-5262-4842-ba76-4ac168e42455\") "
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.744332 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-config\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.744359 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjh62\" (UniqueName: \"kubernetes.io/projected/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-kube-api-access-fjh62\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.744378 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-serving-cert\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.744411 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-client-ca\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.744430 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-tmp\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.744484 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-proxy-ca-bundles\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.744859 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-client-ca" (OuterVolumeSpecName: "client-ca") pod "c318e5ed-5262-4842-ba76-4ac168e42455" (UID: "c318e5ed-5262-4842-ba76-4ac168e42455"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.744887 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-config" (OuterVolumeSpecName: "config") pod "c318e5ed-5262-4842-ba76-4ac168e42455" (UID: "c318e5ed-5262-4842-ba76-4ac168e42455"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.745004 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c318e5ed-5262-4842-ba76-4ac168e42455-tmp" (OuterVolumeSpecName: "tmp") pod "c318e5ed-5262-4842-ba76-4ac168e42455" (UID: "c318e5ed-5262-4842-ba76-4ac168e42455"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.745274 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c318e5ed-5262-4842-ba76-4ac168e42455" (UID: "c318e5ed-5262-4842-ba76-4ac168e42455"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.747063 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c318e5ed-5262-4842-ba76-4ac168e42455-kube-api-access-fpkzf" (OuterVolumeSpecName: "kube-api-access-fpkzf") pod "c318e5ed-5262-4842-ba76-4ac168e42455" (UID: "c318e5ed-5262-4842-ba76-4ac168e42455"). InnerVolumeSpecName "kube-api-access-fpkzf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.747999 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c318e5ed-5262-4842-ba76-4ac168e42455-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c318e5ed-5262-4842-ba76-4ac168e42455" (UID: "c318e5ed-5262-4842-ba76-4ac168e42455"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.778021 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.845455 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-proxy-ca-bundles\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.845730 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-config\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.845830 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fjh62\" (UniqueName: \"kubernetes.io/projected/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-kube-api-access-fjh62\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.845912 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-serving-cert\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.846008 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-client-ca\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.846088 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-tmp\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.846201 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c318e5ed-5262-4842-ba76-4ac168e42455-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.846266 5106 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.846328 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fpkzf\" (UniqueName: \"kubernetes.io/projected/c318e5ed-5262-4842-ba76-4ac168e42455-kube-api-access-fpkzf\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.846382 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.846442 5106 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c318e5ed-5262-4842-ba76-4ac168e42455-client-ca\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.846504 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c318e5ed-5262-4842-ba76-4ac168e42455-tmp\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.847156 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-tmp\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.848678 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-client-ca\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.849426 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-proxy-ca-bundles\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.850780 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-config\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.851775 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-serving-cert\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:41 crc kubenswrapper[5106]: I0320 00:12:41.865169 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjh62\" (UniqueName: \"kubernetes.io/projected/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-kube-api-access-fjh62\") pod \"controller-manager-8b4599bc8-hpfh7\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") " pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.006755 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.026140 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"]
Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.217463 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"]
Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.422239 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7" event={"ID":"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd","Type":"ContainerStarted","Data":"17330fb0bb76909d3ac5948b97bb40d4f17351f116635b7dfeb9ac9d6e35abdc"}
Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.422295 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7" event={"ID":"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd","Type":"ContainerStarted","Data":"44bf4dc970c40aca8e6050055617d2c62394bee182c39d99ab2e1c45aa1c06ef"}
Mar 20 00:12:42 crc kubenswrapper[5106]: I0320
00:12:42.423496 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7" Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.424904 5106 patch_prober.go:28] interesting pod/controller-manager-8b4599bc8-hpfh7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.424966 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7" podUID="7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.426728 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.427247 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7675cb6858-lssg4" event={"ID":"c318e5ed-5262-4842-ba76-4ac168e42455","Type":"ContainerDied","Data":"d9bf899886d3c6caec0ab4bb6b22319a909984f66ce1d24a119087941fc691f5"} Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.427331 5106 scope.go:117] "RemoveContainer" containerID="6548209dfd9f24abed7e4a8a5258abc48c5a97ed6dcf781d6cb9ddb9ebb12089" Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.428908 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4" Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.430673 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8" event={"ID":"8bac7145-7e39-408b-bc0f-5971365fc72c","Type":"ContainerStarted","Data":"cadc69fcab1ef0cb4332a3e3fa8f3efdcba950572f6f3d87c9e0c265ca807476"} Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.430731 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8" event={"ID":"8bac7145-7e39-408b-bc0f-5971365fc72c","Type":"ContainerStarted","Data":"ebee5db7b3ca8d47cc11827400b57b69002b4303ee0bf001d41fd80c2d9ae6fb"} Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.431011 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8" Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.442062 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7" podStartSLOduration=2.442046476 podStartE2EDuration="2.442046476s" podCreationTimestamp="2026-03-20 00:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:12:42.439112129 +0000 UTC m=+216.872846203" watchObservedRunningTime="2026-03-20 00:12:42.442046476 +0000 UTC m=+216.875780530" Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.472737 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8" podStartSLOduration=1.472719798 podStartE2EDuration="1.472719798s" podCreationTimestamp="2026-03-20 00:12:41 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:12:42.470180682 +0000 UTC m=+216.903914736" watchObservedRunningTime="2026-03-20 00:12:42.472719798 +0000 UTC m=+216.906453852" Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.494283 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"] Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.497630 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84cc6d7676-hvjf4"] Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.510346 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7675cb6858-lssg4"] Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.518242 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7675cb6858-lssg4"] Mar 20 00:12:42 crc kubenswrapper[5106]: I0320 00:12:42.905249 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8" Mar 20 00:12:43 crc kubenswrapper[5106]: I0320 00:12:43.168842 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c318e5ed-5262-4842-ba76-4ac168e42455" path="/var/lib/kubelet/pods/c318e5ed-5262-4842-ba76-4ac168e42455/volumes" Mar 20 00:12:43 crc kubenswrapper[5106]: I0320 00:12:43.169689 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc1bf81e-8b70-4151-b76c-2904905725f6" path="/var/lib/kubelet/pods/fc1bf81e-8b70-4151-b76c-2904905725f6/volumes" Mar 20 00:12:43 crc kubenswrapper[5106]: I0320 00:12:43.443122 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7" Mar 20 00:12:55 crc kubenswrapper[5106]: I0320 
00:12:55.374046 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:12:55 crc kubenswrapper[5106]: I0320 00:12:55.374918 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:12:55 crc kubenswrapper[5106]: I0320 00:12:55.705507 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" podUID="af8b1c72-0d76-40cc-9135-92bdefd2a461" containerName="oauth-openshift" containerID="cri-o://0e46b7df796828dfd1add8e76ff1d2fbd11512ef74debcf0099651fb97c7e30c" gracePeriod=15 Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.128343 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.142735 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-ocp-branding-template\") pod \"af8b1c72-0d76-40cc-9135-92bdefd2a461\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.142784 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-login\") pod \"af8b1c72-0d76-40cc-9135-92bdefd2a461\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.142816 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-session\") pod \"af8b1c72-0d76-40cc-9135-92bdefd2a461\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.142877 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-serving-cert\") pod \"af8b1c72-0d76-40cc-9135-92bdefd2a461\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.142912 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-idp-0-file-data\") pod \"af8b1c72-0d76-40cc-9135-92bdefd2a461\" (UID: 
\"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.142955 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/af8b1c72-0d76-40cc-9135-92bdefd2a461-audit-dir\") pod \"af8b1c72-0d76-40cc-9135-92bdefd2a461\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.142979 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-cliconfig\") pod \"af8b1c72-0d76-40cc-9135-92bdefd2a461\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.143014 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-error\") pod \"af8b1c72-0d76-40cc-9135-92bdefd2a461\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.143042 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-service-ca\") pod \"af8b1c72-0d76-40cc-9135-92bdefd2a461\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.143063 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-audit-policies\") pod \"af8b1c72-0d76-40cc-9135-92bdefd2a461\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.143163 5106 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-router-certs\") pod \"af8b1c72-0d76-40cc-9135-92bdefd2a461\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.143190 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-trusted-ca-bundle\") pod \"af8b1c72-0d76-40cc-9135-92bdefd2a461\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.143217 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml5xh\" (UniqueName: \"kubernetes.io/projected/af8b1c72-0d76-40cc-9135-92bdefd2a461-kube-api-access-ml5xh\") pod \"af8b1c72-0d76-40cc-9135-92bdefd2a461\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.143241 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-provider-selection\") pod \"af8b1c72-0d76-40cc-9135-92bdefd2a461\" (UID: \"af8b1c72-0d76-40cc-9135-92bdefd2a461\") " Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.144141 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af8b1c72-0d76-40cc-9135-92bdefd2a461-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "af8b1c72-0d76-40cc-9135-92bdefd2a461" (UID: "af8b1c72-0d76-40cc-9135-92bdefd2a461"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.144155 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "af8b1c72-0d76-40cc-9135-92bdefd2a461" (UID: "af8b1c72-0d76-40cc-9135-92bdefd2a461"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.144656 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "af8b1c72-0d76-40cc-9135-92bdefd2a461" (UID: "af8b1c72-0d76-40cc-9135-92bdefd2a461"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.144739 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "af8b1c72-0d76-40cc-9135-92bdefd2a461" (UID: "af8b1c72-0d76-40cc-9135-92bdefd2a461"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.145408 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "af8b1c72-0d76-40cc-9135-92bdefd2a461" (UID: "af8b1c72-0d76-40cc-9135-92bdefd2a461"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.156138 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "af8b1c72-0d76-40cc-9135-92bdefd2a461" (UID: "af8b1c72-0d76-40cc-9135-92bdefd2a461"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.159665 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af8b1c72-0d76-40cc-9135-92bdefd2a461-kube-api-access-ml5xh" (OuterVolumeSpecName: "kube-api-access-ml5xh") pod "af8b1c72-0d76-40cc-9135-92bdefd2a461" (UID: "af8b1c72-0d76-40cc-9135-92bdefd2a461"). InnerVolumeSpecName "kube-api-access-ml5xh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.162123 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "af8b1c72-0d76-40cc-9135-92bdefd2a461" (UID: "af8b1c72-0d76-40cc-9135-92bdefd2a461"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.164505 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "af8b1c72-0d76-40cc-9135-92bdefd2a461" (UID: "af8b1c72-0d76-40cc-9135-92bdefd2a461"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.164516 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "af8b1c72-0d76-40cc-9135-92bdefd2a461" (UID: "af8b1c72-0d76-40cc-9135-92bdefd2a461"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.164744 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "af8b1c72-0d76-40cc-9135-92bdefd2a461" (UID: "af8b1c72-0d76-40cc-9135-92bdefd2a461"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.167926 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "af8b1c72-0d76-40cc-9135-92bdefd2a461" (UID: "af8b1c72-0d76-40cc-9135-92bdefd2a461"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.168228 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "af8b1c72-0d76-40cc-9135-92bdefd2a461" (UID: "af8b1c72-0d76-40cc-9135-92bdefd2a461"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.174178 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "af8b1c72-0d76-40cc-9135-92bdefd2a461" (UID: "af8b1c72-0d76-40cc-9135-92bdefd2a461"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.178715 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"] Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.179349 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af8b1c72-0d76-40cc-9135-92bdefd2a461" containerName="oauth-openshift" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.179366 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="af8b1c72-0d76-40cc-9135-92bdefd2a461" containerName="oauth-openshift" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.179472 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="af8b1c72-0d76-40cc-9135-92bdefd2a461" containerName="oauth-openshift" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.187967 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"] Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.188185 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.244713 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.244769 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.244794 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stjhf\" (UniqueName: \"kubernetes.io/projected/93e57ca7-278b-47c3-a3ae-7c07849de478-kube-api-access-stjhf\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.244816 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-router-certs\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:12:56 crc 
kubenswrapper[5106]: I0320 00:12:56.244836 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.244853 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-user-template-login\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.244888 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/93e57ca7-278b-47c3-a3ae-7c07849de478-audit-policies\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.244905 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.244941 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/93e57ca7-278b-47c3-a3ae-7c07849de478-audit-dir\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.244964 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-session\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.244980 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-service-ca\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.244998 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245017 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245065 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-user-template-error\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245102 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245112 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245123 5106 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/af8b1c72-0d76-40cc-9135-92bdefd2a461-audit-dir\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245135 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245144 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245153 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245162 5106 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-audit-policies\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245171 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245180 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245190 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ml5xh\" (UniqueName: \"kubernetes.io/projected/af8b1c72-0d76-40cc-9135-92bdefd2a461-kube-api-access-ml5xh\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245199 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245208 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245218 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.245227 5106 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/af8b1c72-0d76-40cc-9135-92bdefd2a461-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.346301 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-router-certs\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.346393 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.346445 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-user-template-login\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.346512 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/93e57ca7-278b-47c3-a3ae-7c07849de478-audit-policies\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.346547 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.346617 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/93e57ca7-278b-47c3-a3ae-7c07849de478-audit-dir\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.346660 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-session\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.346691 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-service-ca\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.346731 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.346770 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.346835 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-user-template-error\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.346926 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.346976 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.347017 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-stjhf\" (UniqueName: \"kubernetes.io/projected/93e57ca7-278b-47c3-a3ae-7c07849de478-kube-api-access-stjhf\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.348758 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/93e57ca7-278b-47c3-a3ae-7c07849de478-audit-dir\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.348895 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.348908 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-service-ca\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.350309 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/93e57ca7-278b-47c3-a3ae-7c07849de478-audit-policies\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.350731 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.350759 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-user-template-error\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.350875 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.351909 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.352386 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-session\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.352464 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.352788 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-user-template-login\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.353225 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-router-certs\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.354484 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/93e57ca7-278b-47c3-a3ae-7c07849de478-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.373395 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-stjhf\" (UniqueName: \"kubernetes.io/projected/93e57ca7-278b-47c3-a3ae-7c07849de478-kube-api-access-stjhf\") pod \"oauth-openshift-575dc4b4cf-qlhmn\" (UID: \"93e57ca7-278b-47c3-a3ae-7c07849de478\") " pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.516604 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.516875 5106 generic.go:358] "Generic (PLEG): container finished" podID="af8b1c72-0d76-40cc-9135-92bdefd2a461" containerID="0e46b7df796828dfd1add8e76ff1d2fbd11512ef74debcf0099651fb97c7e30c" exitCode=0
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.517094 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.517188 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" event={"ID":"af8b1c72-0d76-40cc-9135-92bdefd2a461","Type":"ContainerDied","Data":"0e46b7df796828dfd1add8e76ff1d2fbd11512ef74debcf0099651fb97c7e30c"}
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.517220 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-zbpp6" event={"ID":"af8b1c72-0d76-40cc-9135-92bdefd2a461","Type":"ContainerDied","Data":"8067bb27876a0cc450f1dc3d5cf29040cf4f3906dc95582bfd1a5c0b9c6e1526"}
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.517237 5106 scope.go:117] "RemoveContainer" containerID="0e46b7df796828dfd1add8e76ff1d2fbd11512ef74debcf0099651fb97c7e30c"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.519043 5106 generic.go:358] "Generic (PLEG): container finished" podID="884b9b2b-1ff2-4758-b964-5030e8973573" containerID="1d07106e238e134efd8c3a707cf028575a634888be0f3a8cd3d9946829b42443" exitCode=0
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.519132 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29566080-czff6" event={"ID":"884b9b2b-1ff2-4758-b964-5030e8973573","Type":"ContainerDied","Data":"1d07106e238e134efd8c3a707cf028575a634888be0f3a8cd3d9946829b42443"}
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.551192 5106 scope.go:117] "RemoveContainer" containerID="0e46b7df796828dfd1add8e76ff1d2fbd11512ef74debcf0099651fb97c7e30c"
Mar 20 00:12:56 crc kubenswrapper[5106]: E0320 00:12:56.551669 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e46b7df796828dfd1add8e76ff1d2fbd11512ef74debcf0099651fb97c7e30c\": container with ID starting with 0e46b7df796828dfd1add8e76ff1d2fbd11512ef74debcf0099651fb97c7e30c not found: ID does not exist" containerID="0e46b7df796828dfd1add8e76ff1d2fbd11512ef74debcf0099651fb97c7e30c"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.551715 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e46b7df796828dfd1add8e76ff1d2fbd11512ef74debcf0099651fb97c7e30c"} err="failed to get container status \"0e46b7df796828dfd1add8e76ff1d2fbd11512ef74debcf0099651fb97c7e30c\": rpc error: code = NotFound desc = could not find container \"0e46b7df796828dfd1add8e76ff1d2fbd11512ef74debcf0099651fb97c7e30c\": container with ID starting with 0e46b7df796828dfd1add8e76ff1d2fbd11512ef74debcf0099651fb97c7e30c not found: ID does not exist"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.568186 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-zbpp6"]
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.572672 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-zbpp6"]
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.816431 5106 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.834550 5106 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.834613 5106 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.834772 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835247 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4" gracePeriod=15
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835331 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835344 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835339 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://fefd59f795733cb8744cc94f38c6a15e90eef9eb0e9824f61c74ad917a5fce4b" gracePeriod=15
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835424 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe" gracePeriod=15
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835464 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4" gracePeriod=15
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835362 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835536 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835567 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835642 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835658 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835664 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835699 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835707 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835546 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574" gracePeriod=15
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835720 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835726 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835733 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835740 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835749 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835768 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835973 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.835988 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.836003 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.836013 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.836022 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.836032 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.836040 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.836164 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.836173 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.836187 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.836194 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.836342 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.836357 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.839052 5106 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.854421 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.854475 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.854503 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.854527 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.854598 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.854635 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.854657 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.854676 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.854754 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.854805 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.909080 5106 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: E0320 00:12:56.909786 5106 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.956485 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.956540 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.956562 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.956598 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.956625 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.956672 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.956703 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.956727 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.956910 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.956990 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.957107 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.957154 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.957184 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.957693 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.957772 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.957809 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.957845 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.957876 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.957980 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:12:56 crc kubenswrapper[5106]: I0320 00:12:56.958014 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.168171 5106 status_manager.go:895] "Failed to get status for pod" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" pod="openshift-image-registry/image-pruner-29566080-czff6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29566080-czff6\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.169394 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af8b1c72-0d76-40cc-9135-92bdefd2a461" path="/var/lib/kubelet/pods/af8b1c72-0d76-40cc-9135-92bdefd2a461/volumes" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.211322 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 00:12:57 crc kubenswrapper[5106]: E0320 00:12:57.239672 5106 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 20 00:12:57 crc kubenswrapper[5106]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication_93e57ca7-278b-47c3-a3ae-7c07849de478_0(8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846): error adding pod openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846" Netns:"/var/run/netns/04351578-f49d-4469-b491-6129629127f8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-575dc4b4cf-qlhmn;K8S_POD_INFRA_CONTAINER_ID=8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846;K8S_POD_UID=93e57ca7-278b-47c3-a3ae-7c07849de478" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn] networking: Multus: [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn/93e57ca7-278b-47c3-a3ae-7c07849de478]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-575dc4b4cf-qlhmn?timeout=1m0s": dial tcp 38.102.83.150:6443: connect: connection refused Mar 20 00:12:57 crc kubenswrapper[5106]: ': StdinData: 
{"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 20 00:12:57 crc kubenswrapper[5106]: > Mar 20 00:12:57 crc kubenswrapper[5106]: E0320 00:12:57.239762 5106 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=< Mar 20 00:12:57 crc kubenswrapper[5106]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication_93e57ca7-278b-47c3-a3ae-7c07849de478_0(8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846): error adding pod openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846" Netns:"/var/run/netns/04351578-f49d-4469-b491-6129629127f8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-575dc4b4cf-qlhmn;K8S_POD_INFRA_CONTAINER_ID=8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846;K8S_POD_UID=93e57ca7-278b-47c3-a3ae-7c07849de478" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn] networking: Multus: [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn/93e57ca7-278b-47c3-a3ae-7c07849de478]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: status update failed for pod 
/: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-575dc4b4cf-qlhmn?timeout=1m0s": dial tcp 38.102.83.150:6443: connect: connection refused Mar 20 00:12:57 crc kubenswrapper[5106]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 20 00:12:57 crc kubenswrapper[5106]: > pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:12:57 crc kubenswrapper[5106]: E0320 00:12:57.239793 5106 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err=< Mar 20 00:12:57 crc kubenswrapper[5106]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication_93e57ca7-278b-47c3-a3ae-7c07849de478_0(8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846): error adding pod openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846" Netns:"/var/run/netns/04351578-f49d-4469-b491-6129629127f8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-575dc4b4cf-qlhmn;K8S_POD_INFRA_CONTAINER_ID=8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846;K8S_POD_UID=93e57ca7-278b-47c3-a3ae-7c07849de478" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn] networking: Multus: 
[openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn/93e57ca7-278b-47c3-a3ae-7c07849de478]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-575dc4b4cf-qlhmn?timeout=1m0s": dial tcp 38.102.83.150:6443: connect: connection refused Mar 20 00:12:57 crc kubenswrapper[5106]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 20 00:12:57 crc kubenswrapper[5106]: > pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:12:57 crc kubenswrapper[5106]: E0320 00:12:57.239884 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication(93e57ca7-278b-47c3-a3ae-7c07849de478)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication(93e57ca7-278b-47c3-a3ae-7c07849de478)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication_93e57ca7-278b-47c3-a3ae-7c07849de478_0(8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846): error adding pod openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed 
(add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846\\\" Netns:\\\"/var/run/netns/04351578-f49d-4469-b491-6129629127f8\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-575dc4b4cf-qlhmn;K8S_POD_INFRA_CONTAINER_ID=8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846;K8S_POD_UID=93e57ca7-278b-47c3-a3ae-7c07849de478\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn] networking: Multus: [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn/93e57ca7-278b-47c3-a3ae-7c07849de478]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-575dc4b4cf-qlhmn?timeout=1m0s\\\": dial tcp 38.102.83.150:6443: connect: connection refused\\n': StdinData: {\\\"auxiliaryCNIChainName\\\":\\\"vendor-cni-chain\\\",\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" podUID="93e57ca7-278b-47c3-a3ae-7c07849de478" Mar 20 00:12:57 crc kubenswrapper[5106]: E0320 00:12:57.240399 5106 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event=< Mar 20 00:12:57 crc kubenswrapper[5106]: &Event{ObjectMeta:{oauth-openshift-575dc4b4cf-qlhmn.189e644b2b266dfb openshift-authentication 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-575dc4b4cf-qlhmn,UID:93e57ca7-278b-47c3-a3ae-7c07849de478,APIVersion:v1,ResourceVersion:39341,FieldPath:,},Reason:FailedCreatePodSandBox,Message:Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication_93e57ca7-278b-47c3-a3ae-7c07849de478_0(8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846): error adding pod openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846" Netns:"/var/run/netns/04351578-f49d-4469-b491-6129629127f8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-575dc4b4cf-qlhmn;K8S_POD_INFRA_CONTAINER_ID=8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846;K8S_POD_UID=93e57ca7-278b-47c3-a3ae-7c07849de478" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn] networking: Multus: [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn/93e57ca7-278b-47c3-a3ae-7c07849de478]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: status update failed for pod /: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-575dc4b4cf-qlhmn?timeout=1m0s": dial tcp 38.102.83.150:6443: connect: connection refused Mar 20 00:12:57 crc kubenswrapper[5106]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"},Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:12:57.239817723 +0000 UTC m=+231.673551777,LastTimestamp:2026-03-20 00:12:57.239817723 +0000 UTC m=+231.673551777,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 00:12:57 crc kubenswrapper[5106]: > Mar 20 00:12:57 crc kubenswrapper[5106]: W0320 00:12:57.245894 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-56c678f2098a6b174ed3a2263eb2e6e6c95c4b3436827476a289eacce2834986 WatchSource:0}: Error finding container 56c678f2098a6b174ed3a2263eb2e6e6c95c4b3436827476a289eacce2834986: Status 404 returned error can't find the container with id 56c678f2098a6b174ed3a2263eb2e6e6c95c4b3436827476a289eacce2834986 Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.527010 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.528671 5106 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.529271 5106 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="fefd59f795733cb8744cc94f38c6a15e90eef9eb0e9824f61c74ad917a5fce4b" exitCode=0 Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.529293 5106 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4" exitCode=0 Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.529300 5106 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe" exitCode=0 Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.529306 5106 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574" exitCode=2 Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.529350 5106 scope.go:117] "RemoveContainer" containerID="b378391966874ad3d87c2ab15525eeba5757bc87a130a8e3ec84fe69d154a09d" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.539618 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"2c9229264cd07a0f8781f1c300425e0708c1f43ecda16b1d214fed2e3217ab0c"} Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.539674 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"56c678f2098a6b174ed3a2263eb2e6e6c95c4b3436827476a289eacce2834986"} Mar 20 00:12:57 
crc kubenswrapper[5106]: I0320 00:12:57.540006 5106 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.540516 5106 status_manager.go:895] "Failed to get status for pod" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" pod="openshift-image-registry/image-pruner-29566080-czff6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29566080-czff6\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:12:57 crc kubenswrapper[5106]: E0320 00:12:57.540663 5106 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.542559 5106 generic.go:358] "Generic (PLEG): container finished" podID="316971ca-bb80-40a7-9f09-fe5ef9fb388b" containerID="f05fe504393f6117b964aa0776265f7d76f156c68b45e3ea8e008a8e50988f13" exitCode=0 Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.542707 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"316971ca-bb80-40a7-9f09-fe5ef9fb388b","Type":"ContainerDied","Data":"f05fe504393f6117b964aa0776265f7d76f156c68b45e3ea8e008a8e50988f13"} Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.543549 5106 status_manager.go:895] "Failed to get status for pod" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" pod="openshift-image-registry/image-pruner-29566080-czff6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29566080-czff6\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.543988 5106 status_manager.go:895] "Failed to 
get status for pod" podUID="316971ca-bb80-40a7-9f09-fe5ef9fb388b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.813185 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29566080-czff6" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.814047 5106 status_manager.go:895] "Failed to get status for pod" podUID="316971ca-bb80-40a7-9f09-fe5ef9fb388b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.814535 5106 status_manager.go:895] "Failed to get status for pod" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" pod="openshift-image-registry/image-pruner-29566080-czff6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29566080-czff6\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.870517 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57s6r\" (UniqueName: \"kubernetes.io/projected/884b9b2b-1ff2-4758-b964-5030e8973573-kube-api-access-57s6r\") pod \"884b9b2b-1ff2-4758-b964-5030e8973573\" (UID: \"884b9b2b-1ff2-4758-b964-5030e8973573\") " Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.870594 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/884b9b2b-1ff2-4758-b964-5030e8973573-serviceca\") pod \"884b9b2b-1ff2-4758-b964-5030e8973573\" (UID: \"884b9b2b-1ff2-4758-b964-5030e8973573\") " Mar 
20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.871291 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/884b9b2b-1ff2-4758-b964-5030e8973573-serviceca" (OuterVolumeSpecName: "serviceca") pod "884b9b2b-1ff2-4758-b964-5030e8973573" (UID: "884b9b2b-1ff2-4758-b964-5030e8973573"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.877141 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/884b9b2b-1ff2-4758-b964-5030e8973573-kube-api-access-57s6r" (OuterVolumeSpecName: "kube-api-access-57s6r") pod "884b9b2b-1ff2-4758-b964-5030e8973573" (UID: "884b9b2b-1ff2-4758-b964-5030e8973573"). InnerVolumeSpecName "kube-api-access-57s6r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.972947 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-57s6r\" (UniqueName: \"kubernetes.io/projected/884b9b2b-1ff2-4758-b964-5030e8973573-kube-api-access-57s6r\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:57 crc kubenswrapper[5106]: I0320 00:12:57.972999 5106 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/884b9b2b-1ff2-4758-b964-5030e8973573-serviceca\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.549263 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29566080-czff6" event={"ID":"884b9b2b-1ff2-4758-b964-5030e8973573","Type":"ContainerDied","Data":"31b7e39aa5aafabf81b9c417bbb28dbd1dce1245a51df2fc8f51d0ca88e68ffd"} Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.549597 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31b7e39aa5aafabf81b9c417bbb28dbd1dce1245a51df2fc8f51d0ca88e68ffd" Mar 20 00:12:58 crc 
kubenswrapper[5106]: I0320 00:12:58.549312 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29566080-czff6" Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.553400 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.564344 5106 status_manager.go:895] "Failed to get status for pod" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" pod="openshift-image-registry/image-pruner-29566080-czff6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29566080-czff6\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.564991 5106 status_manager.go:895] "Failed to get status for pod" podUID="316971ca-bb80-40a7-9f09-fe5ef9fb388b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.863432 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.863971 5106 status_manager.go:895] "Failed to get status for pod" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" pod="openshift-image-registry/image-pruner-29566080-czff6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29566080-czff6\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.864258 5106 status_manager.go:895] "Failed to get status for pod" podUID="316971ca-bb80-40a7-9f09-fe5ef9fb388b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.890265 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/316971ca-bb80-40a7-9f09-fe5ef9fb388b-kube-api-access\") pod \"316971ca-bb80-40a7-9f09-fe5ef9fb388b\" (UID: \"316971ca-bb80-40a7-9f09-fe5ef9fb388b\") " Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.890367 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/316971ca-bb80-40a7-9f09-fe5ef9fb388b-var-lock\") pod \"316971ca-bb80-40a7-9f09-fe5ef9fb388b\" (UID: \"316971ca-bb80-40a7-9f09-fe5ef9fb388b\") " Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.890428 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/316971ca-bb80-40a7-9f09-fe5ef9fb388b-kubelet-dir\") pod \"316971ca-bb80-40a7-9f09-fe5ef9fb388b\" (UID: \"316971ca-bb80-40a7-9f09-fe5ef9fb388b\") " Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.890488 5106 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/316971ca-bb80-40a7-9f09-fe5ef9fb388b-var-lock" (OuterVolumeSpecName: "var-lock") pod "316971ca-bb80-40a7-9f09-fe5ef9fb388b" (UID: "316971ca-bb80-40a7-9f09-fe5ef9fb388b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.890538 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/316971ca-bb80-40a7-9f09-fe5ef9fb388b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "316971ca-bb80-40a7-9f09-fe5ef9fb388b" (UID: "316971ca-bb80-40a7-9f09-fe5ef9fb388b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.890915 5106 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/316971ca-bb80-40a7-9f09-fe5ef9fb388b-var-lock\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.890939 5106 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/316971ca-bb80-40a7-9f09-fe5ef9fb388b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.908287 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316971ca-bb80-40a7-9f09-fe5ef9fb388b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "316971ca-bb80-40a7-9f09-fe5ef9fb388b" (UID: "316971ca-bb80-40a7-9f09-fe5ef9fb388b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:12:58 crc kubenswrapper[5106]: I0320 00:12:58.992274 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/316971ca-bb80-40a7-9f09-fe5ef9fb388b-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 00:12:59 crc kubenswrapper[5106]: I0320 00:12:59.563184 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Mar 20 00:12:59 crc kubenswrapper[5106]: I0320 00:12:59.563188 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"316971ca-bb80-40a7-9f09-fe5ef9fb388b","Type":"ContainerDied","Data":"9e3c7c50b838d14217f20b16e207796169184cf23d1bedc0126a222ae5a7c739"} Mar 20 00:12:59 crc kubenswrapper[5106]: I0320 00:12:59.565071 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e3c7c50b838d14217f20b16e207796169184cf23d1bedc0126a222ae5a7c739" Mar 20 00:12:59 crc kubenswrapper[5106]: I0320 00:12:59.566530 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Mar 20 00:12:59 crc kubenswrapper[5106]: I0320 00:12:59.567394 5106 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4" exitCode=0 Mar 20 00:12:59 crc kubenswrapper[5106]: I0320 00:12:59.569694 5106 status_manager.go:895] "Failed to get status for pod" podUID="316971ca-bb80-40a7-9f09-fe5ef9fb388b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:12:59 crc kubenswrapper[5106]: I0320 00:12:59.569958 5106 
status_manager.go:895] "Failed to get status for pod" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" pod="openshift-image-registry/image-pruner-29566080-czff6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29566080-czff6\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:12:59 crc kubenswrapper[5106]: I0320 00:12:59.915893 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Mar 20 00:12:59 crc kubenswrapper[5106]: I0320 00:12:59.916992 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:12:59 crc kubenswrapper[5106]: I0320 00:12:59.917505 5106 status_manager.go:895] "Failed to get status for pod" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" pod="openshift-image-registry/image-pruner-29566080-czff6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29566080-czff6\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:12:59 crc kubenswrapper[5106]: I0320 00:12:59.917871 5106 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:12:59 crc kubenswrapper[5106]: I0320 00:12:59.918241 5106 status_manager.go:895] "Failed to get status for pod" podUID="316971ca-bb80-40a7-9f09-fe5ef9fb388b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 
00:13:00.005019 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.005074 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.005107 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.005167 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.005188 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.005211 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.005329 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.005432 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.005630 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.005737 5106 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.005756 5106 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.005767 5106 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.005777 5106 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.008777 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.107188 5106 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.576224 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.577838 5106 scope.go:117] "RemoveContainer" containerID="fefd59f795733cb8744cc94f38c6a15e90eef9eb0e9824f61c74ad917a5fce4b" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.577855 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.596779 5106 status_manager.go:895] "Failed to get status for pod" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" pod="openshift-image-registry/image-pruner-29566080-czff6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29566080-czff6\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.596967 5106 scope.go:117] "RemoveContainer" containerID="357165da23ae68cc29488e922524a979073592e0ffb4fdcc0da8b802db9d5bb4" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.597260 5106 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.597573 5106 status_manager.go:895] "Failed to get 
status for pod" podUID="316971ca-bb80-40a7-9f09-fe5ef9fb388b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.612465 5106 scope.go:117] "RemoveContainer" containerID="eab7be65abd24a517db92743a8a03f7676c55bf81026eef424dbe1d378a875fe" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.627121 5106 scope.go:117] "RemoveContainer" containerID="6babd6b08e057d4097e85e4433b70d83d8c9e805b8dbb95ae04731698f9e1574" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.644633 5106 scope.go:117] "RemoveContainer" containerID="d04118a5073d6c737a0aed2378b75b92a90a05949684a9b5a02124cbe72948d4" Mar 20 00:13:00 crc kubenswrapper[5106]: I0320 00:13:00.657396 5106 scope.go:117] "RemoveContainer" containerID="f3cd1e6527d3cfceadb27e7f75d1a954b3bfa367359d1c1b933e1b510b31b620" Mar 20 00:13:01 crc kubenswrapper[5106]: I0320 00:13:01.169683 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Mar 20 00:13:01 crc kubenswrapper[5106]: E0320 00:13:01.473212 5106 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:01 crc kubenswrapper[5106]: E0320 00:13:01.473622 5106 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:01 crc kubenswrapper[5106]: E0320 00:13:01.474005 5106 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:01 crc kubenswrapper[5106]: E0320 00:13:01.474323 5106 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:01 crc kubenswrapper[5106]: E0320 00:13:01.474638 5106 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:01 crc kubenswrapper[5106]: I0320 00:13:01.474670 5106 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 20 00:13:01 crc kubenswrapper[5106]: E0320 00:13:01.475070 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="200ms" Mar 20 00:13:01 crc kubenswrapper[5106]: E0320 00:13:01.675681 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="400ms" Mar 20 00:13:02 crc kubenswrapper[5106]: E0320 00:13:02.001893 5106 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event=< Mar 20 00:13:02 crc kubenswrapper[5106]: 
&Event{ObjectMeta:{oauth-openshift-575dc4b4cf-qlhmn.189e644b2b266dfb openshift-authentication 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-575dc4b4cf-qlhmn,UID:93e57ca7-278b-47c3-a3ae-7c07849de478,APIVersion:v1,ResourceVersion:39341,FieldPath:,},Reason:FailedCreatePodSandBox,Message:Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication_93e57ca7-278b-47c3-a3ae-7c07849de478_0(8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846): error adding pod openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846" Netns:"/var/run/netns/04351578-f49d-4469-b491-6129629127f8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-575dc4b4cf-qlhmn;K8S_POD_INFRA_CONTAINER_ID=8ad494f6a24b71a425ca1b9dad3c12fa354ec5d39c10f117ce62ece99ebe9846;K8S_POD_UID=93e57ca7-278b-47c3-a3ae-7c07849de478" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn] networking: Multus: [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn/93e57ca7-278b-47c3-a3ae-7c07849de478]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-575dc4b4cf-qlhmn?timeout=1m0s": dial tcp 38.102.83.150:6443: connect: connection refused Mar 20 00:13:02 
crc kubenswrapper[5106]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"},Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 00:12:57.239817723 +0000 UTC m=+231.673551777,LastTimestamp:2026-03-20 00:12:57.239817723 +0000 UTC m=+231.673551777,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 00:13:02 crc kubenswrapper[5106]: > Mar 20 00:13:02 crc kubenswrapper[5106]: E0320 00:13:02.076614 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="800ms" Mar 20 00:13:02 crc kubenswrapper[5106]: E0320 00:13:02.878172 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="1.6s" Mar 20 00:13:04 crc kubenswrapper[5106]: E0320 00:13:04.479566 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="3.2s" Mar 20 00:13:07 crc kubenswrapper[5106]: I0320 00:13:07.169034 5106 status_manager.go:895] "Failed to get status for pod" podUID="316971ca-bb80-40a7-9f09-fe5ef9fb388b" 
pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:07 crc kubenswrapper[5106]: I0320 00:13:07.169293 5106 status_manager.go:895] "Failed to get status for pod" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" pod="openshift-image-registry/image-pruner-29566080-czff6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29566080-czff6\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:07 crc kubenswrapper[5106]: E0320 00:13:07.680394 5106 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="6.4s" Mar 20 00:13:10 crc kubenswrapper[5106]: I0320 00:13:10.160490 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:13:10 crc kubenswrapper[5106]: I0320 00:13:10.161174 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:13:10 crc kubenswrapper[5106]: E0320 00:13:10.837656 5106 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 20 00:13:10 crc kubenswrapper[5106]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication_93e57ca7-278b-47c3-a3ae-7c07849de478_0(fa6d02520334fe9c8b97c0f6018424efcace5d5c28718eec777503f9db9fd87e): error adding pod openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fa6d02520334fe9c8b97c0f6018424efcace5d5c28718eec777503f9db9fd87e" Netns:"/var/run/netns/1f8edc32-f945-45a6-8250-e6360dd8822e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-575dc4b4cf-qlhmn;K8S_POD_INFRA_CONTAINER_ID=fa6d02520334fe9c8b97c0f6018424efcace5d5c28718eec777503f9db9fd87e;K8S_POD_UID=93e57ca7-278b-47c3-a3ae-7c07849de478" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn] networking: Multus: [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn/93e57ca7-278b-47c3-a3ae-7c07849de478]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-575dc4b4cf-qlhmn?timeout=1m0s": dial tcp 38.102.83.150:6443: connect: connection refused Mar 20 00:13:10 crc kubenswrapper[5106]: ': StdinData: 
{"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 20 00:13:10 crc kubenswrapper[5106]: > Mar 20 00:13:10 crc kubenswrapper[5106]: E0320 00:13:10.838055 5106 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=< Mar 20 00:13:10 crc kubenswrapper[5106]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication_93e57ca7-278b-47c3-a3ae-7c07849de478_0(fa6d02520334fe9c8b97c0f6018424efcace5d5c28718eec777503f9db9fd87e): error adding pod openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fa6d02520334fe9c8b97c0f6018424efcace5d5c28718eec777503f9db9fd87e" Netns:"/var/run/netns/1f8edc32-f945-45a6-8250-e6360dd8822e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-575dc4b4cf-qlhmn;K8S_POD_INFRA_CONTAINER_ID=fa6d02520334fe9c8b97c0f6018424efcace5d5c28718eec777503f9db9fd87e;K8S_POD_UID=93e57ca7-278b-47c3-a3ae-7c07849de478" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn] networking: Multus: [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn/93e57ca7-278b-47c3-a3ae-7c07849de478]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: status update failed for pod 
/: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-575dc4b4cf-qlhmn?timeout=1m0s": dial tcp 38.102.83.150:6443: connect: connection refused Mar 20 00:13:10 crc kubenswrapper[5106]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 20 00:13:10 crc kubenswrapper[5106]: > pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:13:10 crc kubenswrapper[5106]: E0320 00:13:10.838085 5106 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err=< Mar 20 00:13:10 crc kubenswrapper[5106]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication_93e57ca7-278b-47c3-a3ae-7c07849de478_0(fa6d02520334fe9c8b97c0f6018424efcace5d5c28718eec777503f9db9fd87e): error adding pod openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fa6d02520334fe9c8b97c0f6018424efcace5d5c28718eec777503f9db9fd87e" Netns:"/var/run/netns/1f8edc32-f945-45a6-8250-e6360dd8822e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-575dc4b4cf-qlhmn;K8S_POD_INFRA_CONTAINER_ID=fa6d02520334fe9c8b97c0f6018424efcace5d5c28718eec777503f9db9fd87e;K8S_POD_UID=93e57ca7-278b-47c3-a3ae-7c07849de478" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn] networking: Multus: 
[openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn/93e57ca7-278b-47c3-a3ae-7c07849de478]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-575dc4b4cf-qlhmn?timeout=1m0s": dial tcp 38.102.83.150:6443: connect: connection refused Mar 20 00:13:10 crc kubenswrapper[5106]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 20 00:13:10 crc kubenswrapper[5106]: > pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:13:10 crc kubenswrapper[5106]: E0320 00:13:10.838166 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication(93e57ca7-278b-47c3-a3ae-7c07849de478)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication(93e57ca7-278b-47c3-a3ae-7c07849de478)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication_93e57ca7-278b-47c3-a3ae-7c07849de478_0(fa6d02520334fe9c8b97c0f6018424efcace5d5c28718eec777503f9db9fd87e): error adding pod openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed 
(add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"fa6d02520334fe9c8b97c0f6018424efcace5d5c28718eec777503f9db9fd87e\\\" Netns:\\\"/var/run/netns/1f8edc32-f945-45a6-8250-e6360dd8822e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-575dc4b4cf-qlhmn;K8S_POD_INFRA_CONTAINER_ID=fa6d02520334fe9c8b97c0f6018424efcace5d5c28718eec777503f9db9fd87e;K8S_POD_UID=93e57ca7-278b-47c3-a3ae-7c07849de478\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn] networking: Multus: [openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn/93e57ca7-278b-47c3-a3ae-7c07849de478]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: SetNetworkStatus: failed to update the pod oauth-openshift-575dc4b4cf-qlhmn in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-575dc4b4cf-qlhmn?timeout=1m0s\\\": dial tcp 38.102.83.150:6443: connect: connection refused\\n': StdinData: {\\\"auxiliaryCNIChainName\\\":\\\"vendor-cni-chain\\\",\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" podUID="93e57ca7-278b-47c3-a3ae-7c07849de478" Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.160203 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.161318 5106 status_manager.go:895] "Failed to get status for pod" podUID="316971ca-bb80-40a7-9f09-fe5ef9fb388b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.161885 5106 status_manager.go:895] "Failed to get status for pod" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" pod="openshift-image-registry/image-pruner-29566080-czff6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29566080-czff6\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.179027 5106 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f7bf04f-91df-48c2-916a-afe1e635b543" Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.179066 5106 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f7bf04f-91df-48c2-916a-afe1e635b543" Mar 20 00:13:11 crc kubenswrapper[5106]: E0320 00:13:11.179617 5106 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.179997 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:13:11 crc kubenswrapper[5106]: W0320 00:13:11.199304 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-cd945be03fe25838394bfbb5cd25eb0e44dae27f2485e39d4861de6866512be8 WatchSource:0}: Error finding container cd945be03fe25838394bfbb5cd25eb0e44dae27f2485e39d4861de6866512be8: Status 404 returned error can't find the container with id cd945be03fe25838394bfbb5cd25eb0e44dae27f2485e39d4861de6866512be8 Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.662281 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.662616 5106 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd" exitCode=1 Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.662757 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd"} Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.663329 5106 scope.go:117] "RemoveContainer" containerID="0812249f5017667208da844311c285a9577047b01b1a460853d5df18f11708dd" Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.664220 5106 status_manager.go:895] "Failed to get status for pod" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" pod="openshift-image-registry/image-pruner-29566080-czff6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29566080-czff6\": dial tcp 
38.102.83.150:6443: connect: connection refused" Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.665009 5106 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="2c19edd2f3b2821ebcafcf58f99ad139ec1b5f60c86bb9a578338c6802517b8e" exitCode=0 Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.665173 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"2c19edd2f3b2821ebcafcf58f99ad139ec1b5f60c86bb9a578338c6802517b8e"} Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.665214 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"cd945be03fe25838394bfbb5cd25eb0e44dae27f2485e39d4861de6866512be8"} Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.665560 5106 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f7bf04f-91df-48c2-916a-afe1e635b543" Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.665609 5106 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f7bf04f-91df-48c2-916a-afe1e635b543" Mar 20 00:13:11 crc kubenswrapper[5106]: E0320 00:13:11.666145 5106 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.670754 5106 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.671328 5106 status_manager.go:895] "Failed to get status for pod" podUID="316971ca-bb80-40a7-9f09-fe5ef9fb388b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.672070 5106 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.672619 5106 status_manager.go:895] "Failed to get status for pod" podUID="316971ca-bb80-40a7-9f09-fe5ef9fb388b" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:11 crc kubenswrapper[5106]: I0320 00:13:11.673143 5106 status_manager.go:895] "Failed to get status for pod" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" pod="openshift-image-registry/image-pruner-29566080-czff6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-pruner-29566080-czff6\": dial tcp 38.102.83.150:6443: connect: connection refused" Mar 20 00:13:12 crc kubenswrapper[5106]: I0320 00:13:12.676170 5106 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Mar 20 00:13:12 crc kubenswrapper[5106]: I0320 00:13:12.676624 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"5d50030eabbda1d7066069e8d922e83546e57934d29bc24ffed8d979c7ee5b7e"} Mar 20 00:13:12 crc kubenswrapper[5106]: I0320 00:13:12.690693 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"df55fd7ef04abdc93f6545c636b825552ce3b89ce69eacb2490f20d697ce2243"} Mar 20 00:13:12 crc kubenswrapper[5106]: I0320 00:13:12.690734 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"965809ff5fe2db6e2ba5886d77b7137d15617e5b5c3700b238e6f4dd19a77a22"} Mar 20 00:13:12 crc kubenswrapper[5106]: I0320 00:13:12.690743 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"9fc9f7932c7798a92b3f48129ef11bb677f41a9eec9ad702c229c5f9691d46c5"} Mar 20 00:13:12 crc kubenswrapper[5106]: I0320 00:13:12.690751 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6d7459c528bbe006cc223c14c932b9c572766e32f29caffafeede40f720cb54b"} Mar 20 00:13:13 crc kubenswrapper[5106]: I0320 00:13:13.711776 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"82aef283ba731eaf8f725e64b59c247c4b51b4c8fb352ac6840cf8eec3ca232c"} Mar 20 00:13:13 crc kubenswrapper[5106]: I0320 00:13:13.712881 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:13:13 crc kubenswrapper[5106]: I0320 00:13:13.713082 5106 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f7bf04f-91df-48c2-916a-afe1e635b543" Mar 20 00:13:13 crc kubenswrapper[5106]: I0320 00:13:13.713159 5106 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f7bf04f-91df-48c2-916a-afe1e635b543" Mar 20 00:13:16 crc kubenswrapper[5106]: I0320 00:13:16.180361 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:13:16 crc kubenswrapper[5106]: I0320 00:13:16.180427 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:13:16 crc kubenswrapper[5106]: I0320 00:13:16.189869 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:13:18 crc kubenswrapper[5106]: I0320 00:13:18.308822 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:13:18 crc kubenswrapper[5106]: I0320 00:13:18.316291 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:13:18 crc kubenswrapper[5106]: I0320 00:13:18.725833 5106 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:13:18 crc kubenswrapper[5106]: I0320 00:13:18.725864 5106 
kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:13:18 crc kubenswrapper[5106]: I0320 00:13:18.738197 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:13:19 crc kubenswrapper[5106]: I0320 00:13:19.744812 5106 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f7bf04f-91df-48c2-916a-afe1e635b543" Mar 20 00:13:19 crc kubenswrapper[5106]: I0320 00:13:19.744858 5106 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f7bf04f-91df-48c2-916a-afe1e635b543" Mar 20 00:13:19 crc kubenswrapper[5106]: I0320 00:13:19.750607 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:13:19 crc kubenswrapper[5106]: I0320 00:13:19.753544 5106 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="162323ee-0a8c-4e0b-ac58-9e02ba82aa4c" Mar 20 00:13:20 crc kubenswrapper[5106]: I0320 00:13:20.751445 5106 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f7bf04f-91df-48c2-916a-afe1e635b543" Mar 20 00:13:20 crc kubenswrapper[5106]: I0320 00:13:20.751808 5106 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f7bf04f-91df-48c2-916a-afe1e635b543" Mar 20 00:13:24 crc kubenswrapper[5106]: I0320 00:13:24.160240 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:13:24 crc kubenswrapper[5106]: I0320 00:13:24.161246 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:13:24 crc kubenswrapper[5106]: I0320 00:13:24.776518 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" event={"ID":"93e57ca7-278b-47c3-a3ae-7c07849de478","Type":"ContainerStarted","Data":"a3def783f21055b26a8af0860a2a43601d87b300576774d0a0ac5e2da78a4243"} Mar 20 00:13:25 crc kubenswrapper[5106]: I0320 00:13:25.373114 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:13:25 crc kubenswrapper[5106]: I0320 00:13:25.373179 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:13:25 crc kubenswrapper[5106]: I0320 00:13:25.784735 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/0.log" Mar 20 00:13:25 crc kubenswrapper[5106]: I0320 00:13:25.785110 5106 generic.go:358] "Generic (PLEG): container finished" podID="93e57ca7-278b-47c3-a3ae-7c07849de478" containerID="64c905abcb54f3d355bb0402e0eaa4e3ffa2daf216198058496edd6e6e59f80a" exitCode=255 Mar 20 00:13:25 crc kubenswrapper[5106]: I0320 00:13:25.785309 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" 
event={"ID":"93e57ca7-278b-47c3-a3ae-7c07849de478","Type":"ContainerDied","Data":"64c905abcb54f3d355bb0402e0eaa4e3ffa2daf216198058496edd6e6e59f80a"} Mar 20 00:13:25 crc kubenswrapper[5106]: I0320 00:13:25.786355 5106 scope.go:117] "RemoveContainer" containerID="64c905abcb54f3d355bb0402e0eaa4e3ffa2daf216198058496edd6e6e59f80a" Mar 20 00:13:26 crc kubenswrapper[5106]: I0320 00:13:26.517643 5106 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:13:26 crc kubenswrapper[5106]: I0320 00:13:26.517707 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:13:26 crc kubenswrapper[5106]: I0320 00:13:26.794962 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/0.log" Mar 20 00:13:26 crc kubenswrapper[5106]: I0320 00:13:26.795493 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" event={"ID":"93e57ca7-278b-47c3-a3ae-7c07849de478","Type":"ContainerStarted","Data":"e67325bb33417c49297a6b16814f0f28b0d541cf4d2c5284d83b940eb281f40a"} Mar 20 00:13:26 crc kubenswrapper[5106]: I0320 00:13:26.795928 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:13:26 crc kubenswrapper[5106]: I0320 00:13:26.852500 5106 patch_prober.go:28] interesting pod/oauth-openshift-575dc4b4cf-qlhmn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.66:6443/healthz\": read tcp 10.217.0.2:53330->10.217.0.66:6443: read: connection reset by peer" start-of-body= Mar 20 00:13:26 crc kubenswrapper[5106]: I0320 00:13:26.852649 5106 prober.go:120] "Probe failed" 
probeType="Readiness" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" podUID="93e57ca7-278b-47c3-a3ae-7c07849de478" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.66:6443/healthz\": read tcp 10.217.0.2:53330->10.217.0.66:6443: read: connection reset by peer" Mar 20 00:13:27 crc kubenswrapper[5106]: I0320 00:13:27.192697 5106 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="162323ee-0a8c-4e0b-ac58-9e02ba82aa4c" Mar 20 00:13:27 crc kubenswrapper[5106]: I0320 00:13:27.801554 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/1.log" Mar 20 00:13:27 crc kubenswrapper[5106]: I0320 00:13:27.802180 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/0.log" Mar 20 00:13:27 crc kubenswrapper[5106]: I0320 00:13:27.802225 5106 generic.go:358] "Generic (PLEG): container finished" podID="93e57ca7-278b-47c3-a3ae-7c07849de478" containerID="e67325bb33417c49297a6b16814f0f28b0d541cf4d2c5284d83b940eb281f40a" exitCode=255 Mar 20 00:13:27 crc kubenswrapper[5106]: I0320 00:13:27.802350 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" event={"ID":"93e57ca7-278b-47c3-a3ae-7c07849de478","Type":"ContainerDied","Data":"e67325bb33417c49297a6b16814f0f28b0d541cf4d2c5284d83b940eb281f40a"} Mar 20 00:13:27 crc kubenswrapper[5106]: I0320 00:13:27.802397 5106 scope.go:117] "RemoveContainer" containerID="64c905abcb54f3d355bb0402e0eaa4e3ffa2daf216198058496edd6e6e59f80a" Mar 20 00:13:27 crc kubenswrapper[5106]: I0320 00:13:27.803147 5106 scope.go:117] "RemoveContainer" 
containerID="e67325bb33417c49297a6b16814f0f28b0d541cf4d2c5284d83b940eb281f40a" Mar 20 00:13:27 crc kubenswrapper[5106]: E0320 00:13:27.803513 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication(93e57ca7-278b-47c3-a3ae-7c07849de478)\"" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" podUID="93e57ca7-278b-47c3-a3ae-7c07849de478" Mar 20 00:13:28 crc kubenswrapper[5106]: I0320 00:13:28.710662 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Mar 20 00:13:28 crc kubenswrapper[5106]: I0320 00:13:28.810759 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/1.log" Mar 20 00:13:28 crc kubenswrapper[5106]: I0320 00:13:28.811385 5106 scope.go:117] "RemoveContainer" containerID="e67325bb33417c49297a6b16814f0f28b0d541cf4d2c5284d83b940eb281f40a" Mar 20 00:13:28 crc kubenswrapper[5106]: E0320 00:13:28.811659 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication(93e57ca7-278b-47c3-a3ae-7c07849de478)\"" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" podUID="93e57ca7-278b-47c3-a3ae-7c07849de478" Mar 20 00:13:28 crc kubenswrapper[5106]: I0320 00:13:28.836918 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Mar 20 00:13:29 crc kubenswrapper[5106]: I0320 00:13:29.339535 5106 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Mar 20 00:13:29 crc kubenswrapper[5106]: I0320 00:13:29.540003 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Mar 20 00:13:29 crc kubenswrapper[5106]: I0320 00:13:29.605348 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Mar 20 00:13:29 crc kubenswrapper[5106]: I0320 00:13:29.702957 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Mar 20 00:13:29 crc kubenswrapper[5106]: I0320 00:13:29.749054 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 00:13:29 crc kubenswrapper[5106]: I0320 00:13:29.884726 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Mar 20 00:13:29 crc kubenswrapper[5106]: I0320 00:13:29.964375 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Mar 20 00:13:30 crc kubenswrapper[5106]: I0320 00:13:30.122986 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Mar 20 00:13:30 crc kubenswrapper[5106]: I0320 00:13:30.339249 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Mar 20 00:13:30 crc kubenswrapper[5106]: I0320 00:13:30.439266 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Mar 20 00:13:30 crc kubenswrapper[5106]: I0320 00:13:30.457874 5106 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Mar 20 00:13:30 crc kubenswrapper[5106]: I0320 00:13:30.511935 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.255096 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.348275 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.362420 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.362908 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.469157 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.486517 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.497971 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.514996 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.609145 5106 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.665842 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.667031 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.686527 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.828308 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.828930 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.835719 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Mar 20 00:13:31 crc kubenswrapper[5106]: I0320 00:13:31.891788 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.043121 5106 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.259825 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.272946 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.314020 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.371644 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.434813 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.435162 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.587651 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.592028 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.639427 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.683455 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.700055 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.723223 5106 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.737270 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.833010 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Mar 20 00:13:32 crc kubenswrapper[5106]: I0320 00:13:32.898120 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.088891 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.092740 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.143051 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.191033 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.203876 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.210893 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Mar 20 00:13:33 crc 
kubenswrapper[5106]: I0320 00:13:33.312720 5106 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.326945 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.383523 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.428097 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.641539 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.643300 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.665953 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.738625 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.756151 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.797908 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.826763 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.827390 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.888059 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.932704 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Mar 20 00:13:33 crc kubenswrapper[5106]: I0320 00:13:33.958581 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.060503 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.063153 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.112957 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.132202 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.281230 5106 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.344159 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.362950 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.406602 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.420990 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.439390 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.444005 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.460291 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.541878 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.607131 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Mar 20 00:13:34 crc 
kubenswrapper[5106]: I0320 00:13:34.633437 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.744423 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.833346 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.848318 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.859533 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.909895 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.923168 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Mar 20 00:13:34 crc kubenswrapper[5106]: I0320 00:13:34.947133 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.068243 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.111007 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.122191 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.122774 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.158303 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.259963 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.260106 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.369276 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.395220 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.412132 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.420205 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.517534 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.524031 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.552855 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.582135 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.668862 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.670182 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.747894 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.757974 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.895756 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Mar 20 00:13:35 crc kubenswrapper[5106]: I0320 00:13:35.969590 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.074549 5106 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.198743 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.257693 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.384490 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.409515 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.417305 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.437199 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.500946 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.517204 5106 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.518697 5106 scope.go:117] "RemoveContainer" containerID="e67325bb33417c49297a6b16814f0f28b0d541cf4d2c5284d83b940eb281f40a" Mar 20 00:13:36 crc 
kubenswrapper[5106]: E0320 00:13:36.519328 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication(93e57ca7-278b-47c3-a3ae-7c07849de478)\"" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" podUID="93e57ca7-278b-47c3-a3ae-7c07849de478" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.539272 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.559873 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.654397 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.681801 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.752374 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.796122 5106 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.833981 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.865023 5106 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Mar 20 00:13:36 crc kubenswrapper[5106]: I0320 00:13:36.929373 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.004570 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.070606 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.116116 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.169069 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.175922 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.337949 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.420011 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.481087 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.532153 5106 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.546366 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.552476 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.596742 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.641356 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.654913 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.659464 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.668392 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.670332 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.672863 5106 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.708247 5106 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.710220 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.713679 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.805043 5106 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.809358 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.809397 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.809411 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn"] Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.810019 5106 scope.go:117] "RemoveContainer" containerID="e67325bb33417c49297a6b16814f0f28b0d541cf4d2c5284d83b940eb281f40a" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.818033 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.844127 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.844103403 podStartE2EDuration="19.844103403s" podCreationTimestamp="2026-03-20 00:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-20 00:13:37.834535338 +0000 UTC m=+272.268269402" watchObservedRunningTime="2026-03-20 00:13:37.844103403 +0000 UTC m=+272.277837487" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.860002 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.982849 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Mar 20 00:13:37 crc kubenswrapper[5106]: I0320 00:13:37.987702 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.064516 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.083457 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.093294 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.106136 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.137108 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.207520 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Mar 20 
00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.214883 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.338818 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.368002 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.377483 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.402414 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.426293 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.429686 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.434513 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.470694 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.503416 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.604934 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.628274 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.751562 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.886875 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/2.log" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.888422 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/1.log" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.888463 5106 generic.go:358] "Generic (PLEG): container finished" podID="93e57ca7-278b-47c3-a3ae-7c07849de478" containerID="2f84acde82334adfac68fc029e1f7d553ee2f22eb4c4c75b93a7090a38aa355c" exitCode=255 Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.888530 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" event={"ID":"93e57ca7-278b-47c3-a3ae-7c07849de478","Type":"ContainerDied","Data":"2f84acde82334adfac68fc029e1f7d553ee2f22eb4c4c75b93a7090a38aa355c"} Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.888617 5106 scope.go:117] "RemoveContainer" containerID="e67325bb33417c49297a6b16814f0f28b0d541cf4d2c5284d83b940eb281f40a" Mar 20 00:13:38 crc 
kubenswrapper[5106]: I0320 00:13:38.890224 5106 scope.go:117] "RemoveContainer" containerID="2f84acde82334adfac68fc029e1f7d553ee2f22eb4c4c75b93a7090a38aa355c" Mar 20 00:13:38 crc kubenswrapper[5106]: E0320 00:13:38.890656 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication(93e57ca7-278b-47c3-a3ae-7c07849de478)\"" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" podUID="93e57ca7-278b-47c3-a3ae-7c07849de478" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.893955 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.931751 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.954560 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Mar 20 00:13:38 crc kubenswrapper[5106]: I0320 00:13:38.968922 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.029802 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.045299 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.108931 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.198058 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.249310 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.273684 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.338264 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.522357 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.570439 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.594774 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.627925 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.738436 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 
00:13:39.743539 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.840106 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.852976 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.894692 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/2.log" Mar 20 00:13:39 crc kubenswrapper[5106]: I0320 00:13:39.899821 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.001589 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.044132 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.045574 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.097972 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.286329 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.294890 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.407796 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.523197 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.542187 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.709989 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.774684 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.897480 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"]
Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.897814 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8" podUID="8bac7145-7e39-408b-bc0f-5971365fc72c" containerName="route-controller-manager" containerID="cri-o://cadc69fcab1ef0cb4332a3e3fa8f3efdcba950572f6f3d87c9e0c265ca807476" gracePeriod=30
Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.901381 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"]
Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.901656 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7" podUID="7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd" containerName="controller-manager" containerID="cri-o://17330fb0bb76909d3ac5948b97bb40d4f17351f116635b7dfeb9ac9d6e35abdc" gracePeriod=30
Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.964561 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Mar 20 00:13:40 crc kubenswrapper[5106]: I0320 00:13:40.992200 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.014826 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.082047 5106 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.082621 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://2c9229264cd07a0f8781f1c300425e0708c1f43ecda16b1d214fed2e3217ab0c" gracePeriod=5
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.086150 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.087968 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.094077 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.273198 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.302285 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"]
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.302846 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="316971ca-bb80-40a7-9f09-fe5ef9fb388b" containerName="installer"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.302862 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="316971ca-bb80-40a7-9f09-fe5ef9fb388b" containerName="installer"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.302878 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.302885 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.302903 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" containerName="image-pruner"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.302908 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" containerName="image-pruner"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.302921 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8bac7145-7e39-408b-bc0f-5971365fc72c" containerName="route-controller-manager"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.302926 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bac7145-7e39-408b-bc0f-5971365fc72c" containerName="route-controller-manager"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.303009 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.303021 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="316971ca-bb80-40a7-9f09-fe5ef9fb388b" containerName="installer"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.303030 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="884b9b2b-1ff2-4758-b964-5030e8973573" containerName="image-pruner"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.303038 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="8bac7145-7e39-408b-bc0f-5971365fc72c" containerName="route-controller-manager"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.310027 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.341538 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.404206 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.406002 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bac7145-7e39-408b-bc0f-5971365fc72c-serving-cert\") pod \"8bac7145-7e39-408b-bc0f-5971365fc72c\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") "
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.406044 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrzjf\" (UniqueName: \"kubernetes.io/projected/8bac7145-7e39-408b-bc0f-5971365fc72c-kube-api-access-mrzjf\") pod \"8bac7145-7e39-408b-bc0f-5971365fc72c\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") "
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.406086 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8bac7145-7e39-408b-bc0f-5971365fc72c-tmp\") pod \"8bac7145-7e39-408b-bc0f-5971365fc72c\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") "
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.406118 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bac7145-7e39-408b-bc0f-5971365fc72c-config\") pod \"8bac7145-7e39-408b-bc0f-5971365fc72c\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") "
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.406159 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bac7145-7e39-408b-bc0f-5971365fc72c-client-ca\") pod \"8bac7145-7e39-408b-bc0f-5971365fc72c\" (UID: \"8bac7145-7e39-408b-bc0f-5971365fc72c\") "
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.406262 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-client-ca\") pod \"route-controller-manager-754795cf47-rrlcv\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.406310 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-config\") pod \"route-controller-manager-754795cf47-rrlcv\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.406331 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-serving-cert\") pod \"route-controller-manager-754795cf47-rrlcv\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.406363 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-tmp\") pod \"route-controller-manager-754795cf47-rrlcv\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.406394 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq2kw\" (UniqueName: \"kubernetes.io/projected/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-kube-api-access-hq2kw\") pod \"route-controller-manager-754795cf47-rrlcv\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.408681 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bac7145-7e39-408b-bc0f-5971365fc72c-client-ca" (OuterVolumeSpecName: "client-ca") pod "8bac7145-7e39-408b-bc0f-5971365fc72c" (UID: "8bac7145-7e39-408b-bc0f-5971365fc72c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.408945 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bac7145-7e39-408b-bc0f-5971365fc72c-tmp" (OuterVolumeSpecName: "tmp") pod "8bac7145-7e39-408b-bc0f-5971365fc72c" (UID: "8bac7145-7e39-408b-bc0f-5971365fc72c"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.409347 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bac7145-7e39-408b-bc0f-5971365fc72c-config" (OuterVolumeSpecName: "config") pod "8bac7145-7e39-408b-bc0f-5971365fc72c" (UID: "8bac7145-7e39-408b-bc0f-5971365fc72c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.412246 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bac7145-7e39-408b-bc0f-5971365fc72c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8bac7145-7e39-408b-bc0f-5971365fc72c" (UID: "8bac7145-7e39-408b-bc0f-5971365fc72c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.412326 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bac7145-7e39-408b-bc0f-5971365fc72c-kube-api-access-mrzjf" (OuterVolumeSpecName: "kube-api-access-mrzjf") pod "8bac7145-7e39-408b-bc0f-5971365fc72c" (UID: "8bac7145-7e39-408b-bc0f-5971365fc72c"). InnerVolumeSpecName "kube-api-access-mrzjf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.431769 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c44cc568c-djj8n"]
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.432508 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd" containerName="controller-manager"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.432528 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd" containerName="controller-manager"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.432647 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd" containerName="controller-manager"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.437551 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.439722 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.441327 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507351 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-tmp\") pod \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") "
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507421 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-serving-cert\") pod \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") "
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507463 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-proxy-ca-bundles\") pod \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") "
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507513 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjh62\" (UniqueName: \"kubernetes.io/projected/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-kube-api-access-fjh62\") pod \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") "
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507560 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-client-ca\") pod \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") "
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507600 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-config\") pod \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\" (UID: \"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd\") "
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507756 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-client-ca\") pod \"route-controller-manager-754795cf47-rrlcv\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507800 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-config\") pod \"route-controller-manager-754795cf47-rrlcv\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507824 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-serving-cert\") pod \"route-controller-manager-754795cf47-rrlcv\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507855 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-tmp\") pod \"route-controller-manager-754795cf47-rrlcv\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507889 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hq2kw\" (UniqueName: \"kubernetes.io/projected/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-kube-api-access-hq2kw\") pod \"route-controller-manager-754795cf47-rrlcv\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507934 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bac7145-7e39-408b-bc0f-5971365fc72c-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507945 5106 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bac7145-7e39-408b-bc0f-5971365fc72c-client-ca\") on node \"crc\" DevicePath \"\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507955 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bac7145-7e39-408b-bc0f-5971365fc72c-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507964 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mrzjf\" (UniqueName: \"kubernetes.io/projected/8bac7145-7e39-408b-bc0f-5971365fc72c-kube-api-access-mrzjf\") on node \"crc\" DevicePath \"\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.507975 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8bac7145-7e39-408b-bc0f-5971365fc72c-tmp\") on node \"crc\" DevicePath \"\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.508754 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-client-ca" (OuterVolumeSpecName: "client-ca") pod "7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd" (UID: "7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.508834 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-config" (OuterVolumeSpecName: "config") pod "7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd" (UID: "7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.509092 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-tmp" (OuterVolumeSpecName: "tmp") pod "7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd" (UID: "7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.509839 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-client-ca\") pod \"route-controller-manager-754795cf47-rrlcv\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.510781 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-config\") pod \"route-controller-manager-754795cf47-rrlcv\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.511757 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd" (UID: "7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.512105 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd" (UID: "7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.512305 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-tmp\") pod \"route-controller-manager-754795cf47-rrlcv\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.515293 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-serving-cert\") pod \"route-controller-manager-754795cf47-rrlcv\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.515461 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-kube-api-access-fjh62" (OuterVolumeSpecName: "kube-api-access-fjh62") pod "7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd" (UID: "7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd"). InnerVolumeSpecName "kube-api-access-fjh62". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.531798 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq2kw\" (UniqueName: \"kubernetes.io/projected/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-kube-api-access-hq2kw\") pod \"route-controller-manager-754795cf47-rrlcv\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.610199 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45b60199-785b-4290-82f8-c4514023c0b4-serving-cert\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.610582 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-client-ca\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.610646 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-proxy-ca-bundles\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.610675 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29fkv\" (UniqueName: \"kubernetes.io/projected/45b60199-785b-4290-82f8-c4514023c0b4-kube-api-access-29fkv\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.610853 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-config\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.610941 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/45b60199-785b-4290-82f8-c4514023c0b4-tmp\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.611059 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-tmp\") on node \"crc\" DevicePath \"\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.611078 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.611097 5106 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.611107 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fjh62\" (UniqueName: \"kubernetes.io/projected/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-kube-api-access-fjh62\") on node \"crc\" DevicePath \"\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.611116 5106 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-client-ca\") on node \"crc\" DevicePath \"\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.611124 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.623921 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.707253 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.710794 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.711995 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-config\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.712043 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/45b60199-785b-4290-82f8-c4514023c0b4-tmp\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.712094 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45b60199-785b-4290-82f8-c4514023c0b4-serving-cert\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.712123 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-client-ca\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.712167 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-proxy-ca-bundles\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.712190 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-29fkv\" (UniqueName: \"kubernetes.io/projected/45b60199-785b-4290-82f8-c4514023c0b4-kube-api-access-29fkv\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.713157 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/45b60199-785b-4290-82f8-c4514023c0b4-tmp\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.714374 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-client-ca\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.714487 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-config\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.714743 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-proxy-ca-bundles\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.717490 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45b60199-785b-4290-82f8-c4514023c0b4-serving-cert\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.727938 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-29fkv\" (UniqueName: \"kubernetes.io/projected/45b60199-785b-4290-82f8-c4514023c0b4-kube-api-access-29fkv\") pod \"controller-manager-5c44cc568c-djj8n\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.756007 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.767838 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.891172 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.905718 5106 generic.go:358] "Generic (PLEG): container finished" podID="8bac7145-7e39-408b-bc0f-5971365fc72c" containerID="cadc69fcab1ef0cb4332a3e3fa8f3efdcba950572f6f3d87c9e0c265ca807476" exitCode=0
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.905947 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8" event={"ID":"8bac7145-7e39-408b-bc0f-5971365fc72c","Type":"ContainerDied","Data":"cadc69fcab1ef0cb4332a3e3fa8f3efdcba950572f6f3d87c9e0c265ca807476"}
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.905986 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8" event={"ID":"8bac7145-7e39-408b-bc0f-5971365fc72c","Type":"ContainerDied","Data":"ebee5db7b3ca8d47cc11827400b57b69002b4303ee0bf001d41fd80c2d9ae6fb"}
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.906006 5106 scope.go:117] "RemoveContainer" containerID="cadc69fcab1ef0cb4332a3e3fa8f3efdcba950572f6f3d87c9e0c265ca807476"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.906139 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.911134 5106 generic.go:358] "Generic (PLEG): container finished" podID="7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd" containerID="17330fb0bb76909d3ac5948b97bb40d4f17351f116635b7dfeb9ac9d6e35abdc" exitCode=0
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.911208 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.911236 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7" event={"ID":"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd","Type":"ContainerDied","Data":"17330fb0bb76909d3ac5948b97bb40d4f17351f116635b7dfeb9ac9d6e35abdc"}
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.911274 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8b4599bc8-hpfh7" event={"ID":"7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd","Type":"ContainerDied","Data":"44bf4dc970c40aca8e6050055617d2c62394bee182c39d99ab2e1c45aa1c06ef"}
Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.926792 5106 scope.go:117] "RemoveContainer" containerID="cadc69fcab1ef0cb4332a3e3fa8f3efdcba950572f6f3d87c9e0c265ca807476"
Mar 20 00:13:41 crc kubenswrapper[5106]: E0320 00:13:41.927403 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cadc69fcab1ef0cb4332a3e3fa8f3efdcba950572f6f3d87c9e0c265ca807476\": container with ID starting with
cadc69fcab1ef0cb4332a3e3fa8f3efdcba950572f6f3d87c9e0c265ca807476 not found: ID does not exist" containerID="cadc69fcab1ef0cb4332a3e3fa8f3efdcba950572f6f3d87c9e0c265ca807476" Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.927435 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cadc69fcab1ef0cb4332a3e3fa8f3efdcba950572f6f3d87c9e0c265ca807476"} err="failed to get container status \"cadc69fcab1ef0cb4332a3e3fa8f3efdcba950572f6f3d87c9e0c265ca807476\": rpc error: code = NotFound desc = could not find container \"cadc69fcab1ef0cb4332a3e3fa8f3efdcba950572f6f3d87c9e0c265ca807476\": container with ID starting with cadc69fcab1ef0cb4332a3e3fa8f3efdcba950572f6f3d87c9e0c265ca807476 not found: ID does not exist" Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.927456 5106 scope.go:117] "RemoveContainer" containerID="17330fb0bb76909d3ac5948b97bb40d4f17351f116635b7dfeb9ac9d6e35abdc" Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.944535 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"] Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.945741 5106 scope.go:117] "RemoveContainer" containerID="17330fb0bb76909d3ac5948b97bb40d4f17351f116635b7dfeb9ac9d6e35abdc" Mar 20 00:13:41 crc kubenswrapper[5106]: E0320 00:13:41.946023 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17330fb0bb76909d3ac5948b97bb40d4f17351f116635b7dfeb9ac9d6e35abdc\": container with ID starting with 17330fb0bb76909d3ac5948b97bb40d4f17351f116635b7dfeb9ac9d6e35abdc not found: ID does not exist" containerID="17330fb0bb76909d3ac5948b97bb40d4f17351f116635b7dfeb9ac9d6e35abdc" Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.946047 5106 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"17330fb0bb76909d3ac5948b97bb40d4f17351f116635b7dfeb9ac9d6e35abdc"} err="failed to get container status \"17330fb0bb76909d3ac5948b97bb40d4f17351f116635b7dfeb9ac9d6e35abdc\": rpc error: code = NotFound desc = could not find container \"17330fb0bb76909d3ac5948b97bb40d4f17351f116635b7dfeb9ac9d6e35abdc\": container with ID starting with 17330fb0bb76909d3ac5948b97bb40d4f17351f116635b7dfeb9ac9d6e35abdc not found: ID does not exist" Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.949388 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.957937 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5d9498-dfsw8"] Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.964105 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.964338 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"] Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.967677 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8b4599bc8-hpfh7"] Mar 20 00:13:41 crc kubenswrapper[5106]: I0320 00:13:41.972349 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.008768 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.146346 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.185788 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.193716 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.208538 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.327397 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.426118 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.527423 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.547867 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.636966 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.640700 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.705416 5106 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.717794 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.762929 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.805732 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.898429 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.917991 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Mar 20 00:13:42 crc kubenswrapper[5106]: I0320 00:13:42.927792 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Mar 20 00:13:43 crc kubenswrapper[5106]: I0320 00:13:43.137925 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Mar 20 00:13:43 crc kubenswrapper[5106]: I0320 00:13:43.161809 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Mar 20 00:13:43 crc kubenswrapper[5106]: I0320 00:13:43.174250 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd" path="/var/lib/kubelet/pods/7fac4fd9-0edd-4f00-bd48-25ae0a0e18dd/volumes" Mar 20 00:13:43 crc 
kubenswrapper[5106]: I0320 00:13:43.175693 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bac7145-7e39-408b-bc0f-5971365fc72c" path="/var/lib/kubelet/pods/8bac7145-7e39-408b-bc0f-5971365fc72c/volumes" Mar 20 00:13:43 crc kubenswrapper[5106]: I0320 00:13:43.273168 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Mar 20 00:13:43 crc kubenswrapper[5106]: I0320 00:13:43.348411 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Mar 20 00:13:43 crc kubenswrapper[5106]: I0320 00:13:43.404043 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Mar 20 00:13:43 crc kubenswrapper[5106]: I0320 00:13:43.470166 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Mar 20 00:13:43 crc kubenswrapper[5106]: I0320 00:13:43.499892 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Mar 20 00:13:43 crc kubenswrapper[5106]: I0320 00:13:43.611820 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Mar 20 00:13:43 crc kubenswrapper[5106]: I0320 00:13:43.614213 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Mar 20 00:13:43 crc kubenswrapper[5106]: I0320 00:13:43.638661 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Mar 20 00:13:43 crc kubenswrapper[5106]: I0320 00:13:43.820286 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Mar 20 00:13:43 crc kubenswrapper[5106]: I0320 00:13:43.880235 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c44cc568c-djj8n"] Mar 20 00:13:43 crc kubenswrapper[5106]: I0320 00:13:43.891359 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Mar 20 00:13:43 crc kubenswrapper[5106]: I0320 00:13:43.891391 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"] Mar 20 00:13:44 crc kubenswrapper[5106]: I0320 00:13:44.131904 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Mar 20 00:13:44 crc kubenswrapper[5106]: I0320 00:13:44.172331 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Mar 20 00:13:44 crc kubenswrapper[5106]: I0320 00:13:44.213020 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c44cc568c-djj8n"] Mar 20 00:13:44 crc kubenswrapper[5106]: I0320 00:13:44.264729 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"] Mar 20 00:13:44 crc kubenswrapper[5106]: W0320 00:13:44.268921 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6aa38f1_9f77_47d2_a5d9_206924aa18ad.slice/crio-a4d06039abf3b3b4242b0960ef38ac6a59f04e37ceb11e8e07604c05b248e27a WatchSource:0}: Error finding container a4d06039abf3b3b4242b0960ef38ac6a59f04e37ceb11e8e07604c05b248e27a: Status 404 returned error can't find the container with id 
a4d06039abf3b3b4242b0960ef38ac6a59f04e37ceb11e8e07604c05b248e27a Mar 20 00:13:44 crc kubenswrapper[5106]: I0320 00:13:44.859857 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Mar 20 00:13:44 crc kubenswrapper[5106]: I0320 00:13:44.939685 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv" event={"ID":"b6aa38f1-9f77-47d2-a5d9-206924aa18ad","Type":"ContainerStarted","Data":"ca909a6fa2f70377343ab1828d8f723cdecbaea249c46120df17953c37049275"} Mar 20 00:13:44 crc kubenswrapper[5106]: I0320 00:13:44.939741 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv" event={"ID":"b6aa38f1-9f77-47d2-a5d9-206924aa18ad","Type":"ContainerStarted","Data":"a4d06039abf3b3b4242b0960ef38ac6a59f04e37ceb11e8e07604c05b248e27a"} Mar 20 00:13:44 crc kubenswrapper[5106]: I0320 00:13:44.939761 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv" Mar 20 00:13:44 crc kubenswrapper[5106]: I0320 00:13:44.942117 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n" event={"ID":"45b60199-785b-4290-82f8-c4514023c0b4","Type":"ContainerStarted","Data":"794949aecd422f6fcb65f05fc9d559241322768826c3817d557c926a3ccdb32f"} Mar 20 00:13:44 crc kubenswrapper[5106]: I0320 00:13:44.942152 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n" event={"ID":"45b60199-785b-4290-82f8-c4514023c0b4","Type":"ContainerStarted","Data":"fc1305e80f99a6ed64bd46f984c878d2f45fe10d82689e7fded70ef441206e0e"} Mar 20 00:13:44 crc kubenswrapper[5106]: I0320 00:13:44.942371 5106 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n" Mar 20 00:13:44 crc kubenswrapper[5106]: I0320 00:13:44.947037 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv" Mar 20 00:13:44 crc kubenswrapper[5106]: I0320 00:13:44.963489 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv" podStartSLOduration=4.963464733 podStartE2EDuration="4.963464733s" podCreationTimestamp="2026-03-20 00:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:13:44.957701925 +0000 UTC m=+279.391435979" watchObservedRunningTime="2026-03-20 00:13:44.963464733 +0000 UTC m=+279.397198817" Mar 20 00:13:45 crc kubenswrapper[5106]: I0320 00:13:45.345357 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n" Mar 20 00:13:45 crc kubenswrapper[5106]: I0320 00:13:45.349806 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Mar 20 00:13:45 crc kubenswrapper[5106]: I0320 00:13:45.367869 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n" podStartSLOduration=4.36784885 podStartE2EDuration="4.36784885s" podCreationTimestamp="2026-03-20 00:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:13:45.009323694 +0000 UTC m=+279.443057768" watchObservedRunningTime="2026-03-20 00:13:45.36784885 +0000 UTC m=+279.801582914" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 
00:13:46.517549 5106 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.518248 5106 scope.go:117] "RemoveContainer" containerID="2f84acde82334adfac68fc029e1f7d553ee2f22eb4c4c75b93a7090a38aa355c" Mar 20 00:13:46 crc kubenswrapper[5106]: E0320 00:13:46.518510 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication(93e57ca7-278b-47c3-a3ae-7c07849de478)\"" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" podUID="93e57ca7-278b-47c3-a3ae-7c07849de478" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.562838 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.667287 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.667358 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.668954 5106 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.796698 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.802466 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.802520 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.802545 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.802563 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" 
(OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.802649 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.802661 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.802698 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.802726 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.802745 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.802981 5106 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.803000 5106 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.803013 5106 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.803024 5106 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.812314 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.903864 5106 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.954818 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.954861 5106 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="2c9229264cd07a0f8781f1c300425e0708c1f43ecda16b1d214fed2e3217ab0c" exitCode=137 Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.954932 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.955029 5106 scope.go:117] "RemoveContainer" containerID="2c9229264cd07a0f8781f1c300425e0708c1f43ecda16b1d214fed2e3217ab0c" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.955643 5106 scope.go:117] "RemoveContainer" containerID="2f84acde82334adfac68fc029e1f7d553ee2f22eb4c4c75b93a7090a38aa355c" Mar 20 00:13:46 crc kubenswrapper[5106]: E0320 00:13:46.955892 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication(93e57ca7-278b-47c3-a3ae-7c07849de478)\"" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" podUID="93e57ca7-278b-47c3-a3ae-7c07849de478" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.971391 5106 status_manager.go:895] "Failed to get status for pod" 
podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.974165 5106 scope.go:117] "RemoveContainer" containerID="2c9229264cd07a0f8781f1c300425e0708c1f43ecda16b1d214fed2e3217ab0c" Mar 20 00:13:46 crc kubenswrapper[5106]: E0320 00:13:46.974611 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c9229264cd07a0f8781f1c300425e0708c1f43ecda16b1d214fed2e3217ab0c\": container with ID starting with 2c9229264cd07a0f8781f1c300425e0708c1f43ecda16b1d214fed2e3217ab0c not found: ID does not exist" containerID="2c9229264cd07a0f8781f1c300425e0708c1f43ecda16b1d214fed2e3217ab0c" Mar 20 00:13:46 crc kubenswrapper[5106]: I0320 00:13:46.974658 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c9229264cd07a0f8781f1c300425e0708c1f43ecda16b1d214fed2e3217ab0c"} err="failed to get container status \"2c9229264cd07a0f8781f1c300425e0708c1f43ecda16b1d214fed2e3217ab0c\": rpc error: code = NotFound desc = could not find container \"2c9229264cd07a0f8781f1c300425e0708c1f43ecda16b1d214fed2e3217ab0c\": container with ID starting with 2c9229264cd07a0f8781f1c300425e0708c1f43ecda16b1d214fed2e3217ab0c not found: ID does not exist" Mar 20 00:13:47 crc kubenswrapper[5106]: I0320 00:13:47.169486 5106 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no 
relationship found between node 'crc' and this object" Mar 20 00:13:47 crc kubenswrapper[5106]: I0320 00:13:47.173337 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Mar 20 00:13:55 crc kubenswrapper[5106]: I0320 00:13:55.373140 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:13:55 crc kubenswrapper[5106]: I0320 00:13:55.373250 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:13:55 crc kubenswrapper[5106]: I0320 00:13:55.373302 5106 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:13:55 crc kubenswrapper[5106]: I0320 00:13:55.374028 5106 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e305e307099c05996c1326f05d1414ce358ed6c0ec58221736b93d0a4312344c"} pod="openshift-machine-config-operator/machine-config-daemon-769dn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 00:13:55 crc kubenswrapper[5106]: I0320 00:13:55.374127 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" 
containerID="cri-o://e305e307099c05996c1326f05d1414ce358ed6c0ec58221736b93d0a4312344c" gracePeriod=600 Mar 20 00:13:56 crc kubenswrapper[5106]: I0320 00:13:56.007089 5106 generic.go:358] "Generic (PLEG): container finished" podID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerID="e305e307099c05996c1326f05d1414ce358ed6c0ec58221736b93d0a4312344c" exitCode=0 Mar 20 00:13:56 crc kubenswrapper[5106]: I0320 00:13:56.007182 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerDied","Data":"e305e307099c05996c1326f05d1414ce358ed6c0ec58221736b93d0a4312344c"} Mar 20 00:13:56 crc kubenswrapper[5106]: I0320 00:13:56.007767 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerStarted","Data":"6663498fde38077516653246979f4890e53ab8554f504d980573cec239ee48c3"} Mar 20 00:13:57 crc kubenswrapper[5106]: I0320 00:13:57.174179 5106 scope.go:117] "RemoveContainer" containerID="2f84acde82334adfac68fc029e1f7d553ee2f22eb4c4c75b93a7090a38aa355c" Mar 20 00:13:57 crc kubenswrapper[5106]: E0320 00:13:57.174740 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-575dc4b4cf-qlhmn_openshift-authentication(93e57ca7-278b-47c3-a3ae-7c07849de478)\"" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" podUID="93e57ca7-278b-47c3-a3ae-7c07849de478" Mar 20 00:14:00 crc kubenswrapper[5106]: I0320 00:14:00.142589 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566094-5fcbx"] Mar 20 00:14:00 crc kubenswrapper[5106]: I0320 00:14:00.147357 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-infra/auto-csr-approver-29566094-5fcbx"] Mar 20 00:14:00 crc kubenswrapper[5106]: I0320 00:14:00.147423 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566094-5fcbx" Mar 20 00:14:00 crc kubenswrapper[5106]: I0320 00:14:00.149795 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5fjw8\"" Mar 20 00:14:00 crc kubenswrapper[5106]: I0320 00:14:00.150910 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Mar 20 00:14:00 crc kubenswrapper[5106]: I0320 00:14:00.151269 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Mar 20 00:14:00 crc kubenswrapper[5106]: I0320 00:14:00.275061 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmwj9\" (UniqueName: \"kubernetes.io/projected/0e82d111-0784-4d7b-baf1-02bd935d69e6-kube-api-access-lmwj9\") pod \"auto-csr-approver-29566094-5fcbx\" (UID: \"0e82d111-0784-4d7b-baf1-02bd935d69e6\") " pod="openshift-infra/auto-csr-approver-29566094-5fcbx" Mar 20 00:14:00 crc kubenswrapper[5106]: I0320 00:14:00.376826 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lmwj9\" (UniqueName: \"kubernetes.io/projected/0e82d111-0784-4d7b-baf1-02bd935d69e6-kube-api-access-lmwj9\") pod \"auto-csr-approver-29566094-5fcbx\" (UID: \"0e82d111-0784-4d7b-baf1-02bd935d69e6\") " pod="openshift-infra/auto-csr-approver-29566094-5fcbx" Mar 20 00:14:00 crc kubenswrapper[5106]: I0320 00:14:00.395351 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmwj9\" (UniqueName: \"kubernetes.io/projected/0e82d111-0784-4d7b-baf1-02bd935d69e6-kube-api-access-lmwj9\") pod \"auto-csr-approver-29566094-5fcbx\" (UID: 
\"0e82d111-0784-4d7b-baf1-02bd935d69e6\") " pod="openshift-infra/auto-csr-approver-29566094-5fcbx" Mar 20 00:14:00 crc kubenswrapper[5106]: I0320 00:14:00.466386 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566094-5fcbx" Mar 20 00:14:00 crc kubenswrapper[5106]: I0320 00:14:00.861523 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566094-5fcbx"] Mar 20 00:14:00 crc kubenswrapper[5106]: I0320 00:14:00.867754 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c44cc568c-djj8n"] Mar 20 00:14:00 crc kubenswrapper[5106]: I0320 00:14:00.868182 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n" podUID="45b60199-785b-4290-82f8-c4514023c0b4" containerName="controller-manager" containerID="cri-o://794949aecd422f6fcb65f05fc9d559241322768826c3817d557c926a3ccdb32f" gracePeriod=30 Mar 20 00:14:00 crc kubenswrapper[5106]: I0320 00:14:00.877280 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"] Mar 20 00:14:00 crc kubenswrapper[5106]: I0320 00:14:00.877517 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv" podUID="b6aa38f1-9f77-47d2-a5d9-206924aa18ad" containerName="route-controller-manager" containerID="cri-o://ca909a6fa2f70377343ab1828d8f723cdecbaea249c46120df17953c37049275" gracePeriod=30 Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.037311 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566094-5fcbx" event={"ID":"0e82d111-0784-4d7b-baf1-02bd935d69e6","Type":"ContainerStarted","Data":"1afe43865da2b83d7e097d185a6884c2cee1e6a1d67ae28f75707b75501a38c2"} Mar 20 
00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.810836 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv" Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.835130 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj"] Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.835734 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b6aa38f1-9f77-47d2-a5d9-206924aa18ad" containerName="route-controller-manager" Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.835746 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6aa38f1-9f77-47d2-a5d9-206924aa18ad" containerName="route-controller-manager" Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.835846 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="b6aa38f1-9f77-47d2-a5d9-206924aa18ad" containerName="route-controller-manager" Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.897790 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-tmp\") pod \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.897859 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-serving-cert\") pod \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.897997 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-client-ca\") pod 
\"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.898043 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-config\") pod \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.898363 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-tmp" (OuterVolumeSpecName: "tmp") pod "b6aa38f1-9f77-47d2-a5d9-206924aa18ad" (UID: "b6aa38f1-9f77-47d2-a5d9-206924aa18ad"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.898727 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-client-ca" (OuterVolumeSpecName: "client-ca") pod "b6aa38f1-9f77-47d2-a5d9-206924aa18ad" (UID: "b6aa38f1-9f77-47d2-a5d9-206924aa18ad"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.898887 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-config" (OuterVolumeSpecName: "config") pod "b6aa38f1-9f77-47d2-a5d9-206924aa18ad" (UID: "b6aa38f1-9f77-47d2-a5d9-206924aa18ad"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.898977 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq2kw\" (UniqueName: \"kubernetes.io/projected/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-kube-api-access-hq2kw\") pod \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\" (UID: \"b6aa38f1-9f77-47d2-a5d9-206924aa18ad\") " Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.899231 5106 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.899258 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.899271 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-tmp\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.917816 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-kube-api-access-hq2kw" (OuterVolumeSpecName: "kube-api-access-hq2kw") pod "b6aa38f1-9f77-47d2-a5d9-206924aa18ad" (UID: "b6aa38f1-9f77-47d2-a5d9-206924aa18ad"). InnerVolumeSpecName "kube-api-access-hq2kw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.918727 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b6aa38f1-9f77-47d2-a5d9-206924aa18ad" (UID: "b6aa38f1-9f77-47d2-a5d9-206924aa18ad"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.930034 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj"] Mar 20 00:14:01 crc kubenswrapper[5106]: I0320 00:14:01.930190 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.000379 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.000815 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hq2kw\" (UniqueName: \"kubernetes.io/projected/b6aa38f1-9f77-47d2-a5d9-206924aa18ad-kube-api-access-hq2kw\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.043703 5106 generic.go:358] "Generic (PLEG): container finished" podID="45b60199-785b-4290-82f8-c4514023c0b4" containerID="794949aecd422f6fcb65f05fc9d559241322768826c3817d557c926a3ccdb32f" exitCode=0 Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.043803 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n" event={"ID":"45b60199-785b-4290-82f8-c4514023c0b4","Type":"ContainerDied","Data":"794949aecd422f6fcb65f05fc9d559241322768826c3817d557c926a3ccdb32f"} Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.045452 5106 generic.go:358] "Generic (PLEG): container finished" podID="b6aa38f1-9f77-47d2-a5d9-206924aa18ad" containerID="ca909a6fa2f70377343ab1828d8f723cdecbaea249c46120df17953c37049275" exitCode=0 Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.045509 5106 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv" event={"ID":"b6aa38f1-9f77-47d2-a5d9-206924aa18ad","Type":"ContainerDied","Data":"ca909a6fa2f70377343ab1828d8f723cdecbaea249c46120df17953c37049275"} Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.045528 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv" event={"ID":"b6aa38f1-9f77-47d2-a5d9-206924aa18ad","Type":"ContainerDied","Data":"a4d06039abf3b3b4242b0960ef38ac6a59f04e37ceb11e8e07604c05b248e27a"} Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.045543 5106 scope.go:117] "RemoveContainer" containerID="ca909a6fa2f70377343ab1828d8f723cdecbaea249c46120df17953c37049275" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.045556 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.063688 5106 scope.go:117] "RemoveContainer" containerID="ca909a6fa2f70377343ab1828d8f723cdecbaea249c46120df17953c37049275" Mar 20 00:14:02 crc kubenswrapper[5106]: E0320 00:14:02.064244 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca909a6fa2f70377343ab1828d8f723cdecbaea249c46120df17953c37049275\": container with ID starting with ca909a6fa2f70377343ab1828d8f723cdecbaea249c46120df17953c37049275 not found: ID does not exist" containerID="ca909a6fa2f70377343ab1828d8f723cdecbaea249c46120df17953c37049275" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.064271 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca909a6fa2f70377343ab1828d8f723cdecbaea249c46120df17953c37049275"} err="failed to get container status 
\"ca909a6fa2f70377343ab1828d8f723cdecbaea249c46120df17953c37049275\": rpc error: code = NotFound desc = could not find container \"ca909a6fa2f70377343ab1828d8f723cdecbaea249c46120df17953c37049275\": container with ID starting with ca909a6fa2f70377343ab1828d8f723cdecbaea249c46120df17953c37049275 not found: ID does not exist" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.077922 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"] Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.081766 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754795cf47-rrlcv"] Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.102547 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14ea786b-10b5-4c0e-8cda-464f2db14788-client-ca\") pod \"route-controller-manager-84bcc8595b-cfjhj\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.102799 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/14ea786b-10b5-4c0e-8cda-464f2db14788-tmp\") pod \"route-controller-manager-84bcc8595b-cfjhj\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.102857 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p62rd\" (UniqueName: \"kubernetes.io/projected/14ea786b-10b5-4c0e-8cda-464f2db14788-kube-api-access-p62rd\") pod \"route-controller-manager-84bcc8595b-cfjhj\" (UID: 
\"14ea786b-10b5-4c0e-8cda-464f2db14788\") " pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.103123 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ea786b-10b5-4c0e-8cda-464f2db14788-config\") pod \"route-controller-manager-84bcc8595b-cfjhj\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.103262 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14ea786b-10b5-4c0e-8cda-464f2db14788-serving-cert\") pod \"route-controller-manager-84bcc8595b-cfjhj\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.190500 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.204595 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14ea786b-10b5-4c0e-8cda-464f2db14788-serving-cert\") pod \"route-controller-manager-84bcc8595b-cfjhj\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.204643 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14ea786b-10b5-4c0e-8cda-464f2db14788-client-ca\") pod \"route-controller-manager-84bcc8595b-cfjhj\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.204679 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/14ea786b-10b5-4c0e-8cda-464f2db14788-tmp\") pod \"route-controller-manager-84bcc8595b-cfjhj\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.204697 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p62rd\" (UniqueName: \"kubernetes.io/projected/14ea786b-10b5-4c0e-8cda-464f2db14788-kube-api-access-p62rd\") pod \"route-controller-manager-84bcc8595b-cfjhj\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.204740 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/14ea786b-10b5-4c0e-8cda-464f2db14788-config\") pod \"route-controller-manager-84bcc8595b-cfjhj\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.205905 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14ea786b-10b5-4c0e-8cda-464f2db14788-client-ca\") pod \"route-controller-manager-84bcc8595b-cfjhj\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.205943 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/14ea786b-10b5-4c0e-8cda-464f2db14788-tmp\") pod \"route-controller-manager-84bcc8595b-cfjhj\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.205967 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ea786b-10b5-4c0e-8cda-464f2db14788-config\") pod \"route-controller-manager-84bcc8595b-cfjhj\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.212093 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14ea786b-10b5-4c0e-8cda-464f2db14788-serving-cert\") pod \"route-controller-manager-84bcc8595b-cfjhj\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.221534 5106 
kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5fb767755d-6pnv8"] Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.222141 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45b60199-785b-4290-82f8-c4514023c0b4" containerName="controller-manager" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.222163 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b60199-785b-4290-82f8-c4514023c0b4" containerName="controller-manager" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.222270 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="45b60199-785b-4290-82f8-c4514023c0b4" containerName="controller-manager" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.223910 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p62rd\" (UniqueName: \"kubernetes.io/projected/14ea786b-10b5-4c0e-8cda-464f2db14788-kube-api-access-p62rd\") pod \"route-controller-manager-84bcc8595b-cfjhj\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.249191 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.257804 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fb767755d-6pnv8"] Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.258188 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.305988 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-config\") pod \"45b60199-785b-4290-82f8-c4514023c0b4\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.306057 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45b60199-785b-4290-82f8-c4514023c0b4-serving-cert\") pod \"45b60199-785b-4290-82f8-c4514023c0b4\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.306082 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-client-ca\") pod \"45b60199-785b-4290-82f8-c4514023c0b4\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.306158 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/45b60199-785b-4290-82f8-c4514023c0b4-tmp\") pod \"45b60199-785b-4290-82f8-c4514023c0b4\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.306186 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29fkv\" (UniqueName: \"kubernetes.io/projected/45b60199-785b-4290-82f8-c4514023c0b4-kube-api-access-29fkv\") pod \"45b60199-785b-4290-82f8-c4514023c0b4\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.306206 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-proxy-ca-bundles\") pod \"45b60199-785b-4290-82f8-c4514023c0b4\" (UID: \"45b60199-785b-4290-82f8-c4514023c0b4\") " Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.307135 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-client-ca" (OuterVolumeSpecName: "client-ca") pod "45b60199-785b-4290-82f8-c4514023c0b4" (UID: "45b60199-785b-4290-82f8-c4514023c0b4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.307222 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b60199-785b-4290-82f8-c4514023c0b4-tmp" (OuterVolumeSpecName: "tmp") pod "45b60199-785b-4290-82f8-c4514023c0b4" (UID: "45b60199-785b-4290-82f8-c4514023c0b4"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.307400 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-config" (OuterVolumeSpecName: "config") pod "45b60199-785b-4290-82f8-c4514023c0b4" (UID: "45b60199-785b-4290-82f8-c4514023c0b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.307626 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "45b60199-785b-4290-82f8-c4514023c0b4" (UID: "45b60199-785b-4290-82f8-c4514023c0b4"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.309626 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b60199-785b-4290-82f8-c4514023c0b4-kube-api-access-29fkv" (OuterVolumeSpecName: "kube-api-access-29fkv") pod "45b60199-785b-4290-82f8-c4514023c0b4" (UID: "45b60199-785b-4290-82f8-c4514023c0b4"). InnerVolumeSpecName "kube-api-access-29fkv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.310944 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45b60199-785b-4290-82f8-c4514023c0b4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "45b60199-785b-4290-82f8-c4514023c0b4" (UID: "45b60199-785b-4290-82f8-c4514023c0b4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.407234 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-config\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.407408 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7610e477-a93a-43b7-9eb2-8600a48ecac9-serving-cert\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.407564 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/7610e477-a93a-43b7-9eb2-8600a48ecac9-tmp\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.407671 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-client-ca\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.407807 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w96qk\" (UniqueName: \"kubernetes.io/projected/7610e477-a93a-43b7-9eb2-8600a48ecac9-kube-api-access-w96qk\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.407858 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-proxy-ca-bundles\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.407937 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/45b60199-785b-4290-82f8-c4514023c0b4-tmp\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.407987 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-29fkv\" (UniqueName: 
\"kubernetes.io/projected/45b60199-785b-4290-82f8-c4514023c0b4-kube-api-access-29fkv\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.408001 5106 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.408012 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.408020 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45b60199-785b-4290-82f8-c4514023c0b4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.408029 5106 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45b60199-785b-4290-82f8-c4514023c0b4-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.508734 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w96qk\" (UniqueName: \"kubernetes.io/projected/7610e477-a93a-43b7-9eb2-8600a48ecac9-kube-api-access-w96qk\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.508788 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-proxy-ca-bundles\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " 
pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.508833 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-config\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.508886 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7610e477-a93a-43b7-9eb2-8600a48ecac9-serving-cert\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.508943 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7610e477-a93a-43b7-9eb2-8600a48ecac9-tmp\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.508968 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-client-ca\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.509984 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7610e477-a93a-43b7-9eb2-8600a48ecac9-tmp\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: 
\"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.510246 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-client-ca\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.510382 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-proxy-ca-bundles\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.511733 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-config\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.513719 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7610e477-a93a-43b7-9eb2-8600a48ecac9-serving-cert\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.526773 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w96qk\" (UniqueName: 
\"kubernetes.io/projected/7610e477-a93a-43b7-9eb2-8600a48ecac9-kube-api-access-w96qk\") pod \"controller-manager-5fb767755d-6pnv8\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:02 crc kubenswrapper[5106]: I0320 00:14:02.573739 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:03 crc kubenswrapper[5106]: I0320 00:14:03.037238 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fb767755d-6pnv8"] Mar 20 00:14:03 crc kubenswrapper[5106]: W0320 00:14:03.042308 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7610e477_a93a_43b7_9eb2_8600a48ecac9.slice/crio-511b872f64ccaec1a4703f07c9357b43d0f1b3a8eb13ac3cde0cac2e844a3eb2 WatchSource:0}: Error finding container 511b872f64ccaec1a4703f07c9357b43d0f1b3a8eb13ac3cde0cac2e844a3eb2: Status 404 returned error can't find the container with id 511b872f64ccaec1a4703f07c9357b43d0f1b3a8eb13ac3cde0cac2e844a3eb2 Mar 20 00:14:03 crc kubenswrapper[5106]: I0320 00:14:03.055023 5106 generic.go:358] "Generic (PLEG): container finished" podID="59096bb7-5757-4196-96a5-f14e967998e7" containerID="09995baf980859a7d2ed42f91b2cd769f870108d9650c84bfaf6aceedf897639" exitCode=0 Mar 20 00:14:03 crc kubenswrapper[5106]: I0320 00:14:03.055145 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" event={"ID":"59096bb7-5757-4196-96a5-f14e967998e7","Type":"ContainerDied","Data":"09995baf980859a7d2ed42f91b2cd769f870108d9650c84bfaf6aceedf897639"} Mar 20 00:14:03 crc kubenswrapper[5106]: I0320 00:14:03.055656 5106 scope.go:117] "RemoveContainer" containerID="09995baf980859a7d2ed42f91b2cd769f870108d9650c84bfaf6aceedf897639" Mar 20 00:14:03 crc 
kubenswrapper[5106]: I0320 00:14:03.056712 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n" Mar 20 00:14:03 crc kubenswrapper[5106]: I0320 00:14:03.056761 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c44cc568c-djj8n" event={"ID":"45b60199-785b-4290-82f8-c4514023c0b4","Type":"ContainerDied","Data":"fc1305e80f99a6ed64bd46f984c878d2f45fe10d82689e7fded70ef441206e0e"} Mar 20 00:14:03 crc kubenswrapper[5106]: I0320 00:14:03.056815 5106 scope.go:117] "RemoveContainer" containerID="794949aecd422f6fcb65f05fc9d559241322768826c3817d557c926a3ccdb32f" Mar 20 00:14:03 crc kubenswrapper[5106]: I0320 00:14:03.058822 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" event={"ID":"7610e477-a93a-43b7-9eb2-8600a48ecac9","Type":"ContainerStarted","Data":"511b872f64ccaec1a4703f07c9357b43d0f1b3a8eb13ac3cde0cac2e844a3eb2"} Mar 20 00:14:03 crc kubenswrapper[5106]: I0320 00:14:03.101076 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj"] Mar 20 00:14:03 crc kubenswrapper[5106]: I0320 00:14:03.106051 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c44cc568c-djj8n"] Mar 20 00:14:03 crc kubenswrapper[5106]: W0320 00:14:03.109630 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14ea786b_10b5_4c0e_8cda_464f2db14788.slice/crio-6bbc2cd5863932c61946a7636bfe9d5e875440dfe8b6298937cadfa7a78063d3 WatchSource:0}: Error finding container 6bbc2cd5863932c61946a7636bfe9d5e875440dfe8b6298937cadfa7a78063d3: Status 404 returned error can't find the container with id 6bbc2cd5863932c61946a7636bfe9d5e875440dfe8b6298937cadfa7a78063d3 Mar 20 
00:14:03 crc kubenswrapper[5106]: I0320 00:14:03.114603 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c44cc568c-djj8n"] Mar 20 00:14:03 crc kubenswrapper[5106]: I0320 00:14:03.273011 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b60199-785b-4290-82f8-c4514023c0b4" path="/var/lib/kubelet/pods/45b60199-785b-4290-82f8-c4514023c0b4/volumes" Mar 20 00:14:03 crc kubenswrapper[5106]: I0320 00:14:03.273767 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6aa38f1-9f77-47d2-a5d9-206924aa18ad" path="/var/lib/kubelet/pods/b6aa38f1-9f77-47d2-a5d9-206924aa18ad/volumes" Mar 20 00:14:04 crc kubenswrapper[5106]: I0320 00:14:04.089848 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" event={"ID":"14ea786b-10b5-4c0e-8cda-464f2db14788","Type":"ContainerStarted","Data":"7f45cb16bb362e3ea8ade6b15c8fa1e16b1351508fecc239695b51a7ed8e4837"} Mar 20 00:14:04 crc kubenswrapper[5106]: I0320 00:14:04.090180 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" event={"ID":"14ea786b-10b5-4c0e-8cda-464f2db14788","Type":"ContainerStarted","Data":"6bbc2cd5863932c61946a7636bfe9d5e875440dfe8b6298937cadfa7a78063d3"} Mar 20 00:14:04 crc kubenswrapper[5106]: I0320 00:14:04.093257 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:04 crc kubenswrapper[5106]: I0320 00:14:04.110034 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" event={"ID":"7610e477-a93a-43b7-9eb2-8600a48ecac9","Type":"ContainerStarted","Data":"f2d8022090426685795644a9e7412e6ca8ac6a031cfc1658a530f4bded34828a"} Mar 20 00:14:04 crc kubenswrapper[5106]: 
I0320 00:14:04.111050 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:04 crc kubenswrapper[5106]: I0320 00:14:04.113880 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" podStartSLOduration=4.113863064 podStartE2EDuration="4.113863064s" podCreationTimestamp="2026-03-20 00:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:14:04.110758405 +0000 UTC m=+298.544492489" watchObservedRunningTime="2026-03-20 00:14:04.113863064 +0000 UTC m=+298.547597118" Mar 20 00:14:04 crc kubenswrapper[5106]: I0320 00:14:04.115517 5106 generic.go:358] "Generic (PLEG): container finished" podID="0e82d111-0784-4d7b-baf1-02bd935d69e6" containerID="64f2cb73e924e97def5a675deff985559984e33c75735c57fe4632a9b205af9d" exitCode=0 Mar 20 00:14:04 crc kubenswrapper[5106]: I0320 00:14:04.115715 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566094-5fcbx" event={"ID":"0e82d111-0784-4d7b-baf1-02bd935d69e6","Type":"ContainerDied","Data":"64f2cb73e924e97def5a675deff985559984e33c75735c57fe4632a9b205af9d"} Mar 20 00:14:04 crc kubenswrapper[5106]: I0320 00:14:04.119715 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" event={"ID":"59096bb7-5757-4196-96a5-f14e967998e7","Type":"ContainerStarted","Data":"9496e4bc8643e33a4a7cd9bcbb93e36c1bd0141ef265af2f7ee9631a4c736610"} Mar 20 00:14:04 crc kubenswrapper[5106]: I0320 00:14:04.120896 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" Mar 20 00:14:04 crc kubenswrapper[5106]: I0320 00:14:04.121257 5106 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:14:04 crc kubenswrapper[5106]: I0320 00:14:04.122309 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:04 crc kubenswrapper[5106]: I0320 00:14:04.130505 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" Mar 20 00:14:04 crc kubenswrapper[5106]: I0320 00:14:04.136284 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" podStartSLOduration=4.136271297 podStartE2EDuration="4.136271297s" podCreationTimestamp="2026-03-20 00:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:14:04.134659256 +0000 UTC m=+298.568393330" watchObservedRunningTime="2026-03-20 00:14:04.136271297 +0000 UTC m=+298.570005351" Mar 20 00:14:05 crc kubenswrapper[5106]: I0320 00:14:05.394720 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566094-5fcbx" Mar 20 00:14:05 crc kubenswrapper[5106]: I0320 00:14:05.450052 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmwj9\" (UniqueName: \"kubernetes.io/projected/0e82d111-0784-4d7b-baf1-02bd935d69e6-kube-api-access-lmwj9\") pod \"0e82d111-0784-4d7b-baf1-02bd935d69e6\" (UID: \"0e82d111-0784-4d7b-baf1-02bd935d69e6\") " Mar 20 00:14:05 crc kubenswrapper[5106]: I0320 00:14:05.458488 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e82d111-0784-4d7b-baf1-02bd935d69e6-kube-api-access-lmwj9" (OuterVolumeSpecName: "kube-api-access-lmwj9") pod "0e82d111-0784-4d7b-baf1-02bd935d69e6" (UID: "0e82d111-0784-4d7b-baf1-02bd935d69e6"). InnerVolumeSpecName "kube-api-access-lmwj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:14:05 crc kubenswrapper[5106]: I0320 00:14:05.551317 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lmwj9\" (UniqueName: \"kubernetes.io/projected/0e82d111-0784-4d7b-baf1-02bd935d69e6-kube-api-access-lmwj9\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:06 crc kubenswrapper[5106]: I0320 00:14:06.141096 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566094-5fcbx" Mar 20 00:14:06 crc kubenswrapper[5106]: I0320 00:14:06.141147 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566094-5fcbx" event={"ID":"0e82d111-0784-4d7b-baf1-02bd935d69e6","Type":"ContainerDied","Data":"1afe43865da2b83d7e097d185a6884c2cee1e6a1d67ae28f75707b75501a38c2"} Mar 20 00:14:06 crc kubenswrapper[5106]: I0320 00:14:06.141196 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1afe43865da2b83d7e097d185a6884c2cee1e6a1d67ae28f75707b75501a38c2" Mar 20 00:14:07 crc kubenswrapper[5106]: I0320 00:14:07.411061 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/2.log" Mar 20 00:14:07 crc kubenswrapper[5106]: I0320 00:14:07.429064 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/2.log" Mar 20 00:14:07 crc kubenswrapper[5106]: I0320 00:14:07.478002 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Mar 20 00:14:07 crc kubenswrapper[5106]: I0320 00:14:07.489014 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Mar 20 00:14:07 crc kubenswrapper[5106]: I0320 00:14:07.760282 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Mar 20 00:14:11 crc kubenswrapper[5106]: I0320 00:14:11.161455 5106 scope.go:117] "RemoveContainer" containerID="2f84acde82334adfac68fc029e1f7d553ee2f22eb4c4c75b93a7090a38aa355c" Mar 20 
00:14:11 crc kubenswrapper[5106]: I0320 00:14:11.168044 5106 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 00:14:12 crc kubenswrapper[5106]: I0320 00:14:12.173975 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/2.log" Mar 20 00:14:12 crc kubenswrapper[5106]: I0320 00:14:12.174401 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" event={"ID":"93e57ca7-278b-47c3-a3ae-7c07849de478","Type":"ContainerStarted","Data":"353f39cfdad3d05685571e3097119387800b1c10636ff7228f6907c7afeb2538"} Mar 20 00:14:12 crc kubenswrapper[5106]: I0320 00:14:12.175779 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:14:12 crc kubenswrapper[5106]: I0320 00:14:12.201781 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" podStartSLOduration=102.201760508 podStartE2EDuration="1m42.201760508s" podCreationTimestamp="2026-03-20 00:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:13:26.824759688 +0000 UTC m=+261.258493782" watchObservedRunningTime="2026-03-20 00:14:12.201760508 +0000 UTC m=+306.635494572" Mar 20 00:14:12 crc kubenswrapper[5106]: I0320 00:14:12.316522 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-575dc4b4cf-qlhmn" Mar 20 00:14:20 crc kubenswrapper[5106]: I0320 00:14:20.867209 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fb767755d-6pnv8"] Mar 20 00:14:20 crc kubenswrapper[5106]: I0320 00:14:20.868075 5106 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" podUID="7610e477-a93a-43b7-9eb2-8600a48ecac9" containerName="controller-manager" containerID="cri-o://f2d8022090426685795644a9e7412e6ca8ac6a031cfc1658a530f4bded34828a" gracePeriod=30 Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.222266 5106 generic.go:358] "Generic (PLEG): container finished" podID="7610e477-a93a-43b7-9eb2-8600a48ecac9" containerID="f2d8022090426685795644a9e7412e6ca8ac6a031cfc1658a530f4bded34828a" exitCode=0 Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.222417 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" event={"ID":"7610e477-a93a-43b7-9eb2-8600a48ecac9","Type":"ContainerDied","Data":"f2d8022090426685795644a9e7412e6ca8ac6a031cfc1658a530f4bded34828a"} Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.621330 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.647803 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c44cc568c-gb4sq"] Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.648537 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e82d111-0784-4d7b-baf1-02bd935d69e6" containerName="oc" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.648560 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e82d111-0784-4d7b-baf1-02bd935d69e6" containerName="oc" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.648618 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7610e477-a93a-43b7-9eb2-8600a48ecac9" containerName="controller-manager" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.648628 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="7610e477-a93a-43b7-9eb2-8600a48ecac9" containerName="controller-manager" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.648763 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="0e82d111-0784-4d7b-baf1-02bd935d69e6" containerName="oc" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.648778 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="7610e477-a93a-43b7-9eb2-8600a48ecac9" containerName="controller-manager" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.656288 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.662352 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c44cc568c-gb4sq"] Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.760737 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-client-ca\") pod \"7610e477-a93a-43b7-9eb2-8600a48ecac9\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.760806 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-proxy-ca-bundles\") pod \"7610e477-a93a-43b7-9eb2-8600a48ecac9\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.760996 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w96qk\" (UniqueName: \"kubernetes.io/projected/7610e477-a93a-43b7-9eb2-8600a48ecac9-kube-api-access-w96qk\") pod \"7610e477-a93a-43b7-9eb2-8600a48ecac9\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.761045 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7610e477-a93a-43b7-9eb2-8600a48ecac9-tmp\") pod \"7610e477-a93a-43b7-9eb2-8600a48ecac9\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.761131 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-config\") pod \"7610e477-a93a-43b7-9eb2-8600a48ecac9\" (UID: 
\"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.761172 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7610e477-a93a-43b7-9eb2-8600a48ecac9-serving-cert\") pod \"7610e477-a93a-43b7-9eb2-8600a48ecac9\" (UID: \"7610e477-a93a-43b7-9eb2-8600a48ecac9\") " Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.761361 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zldxr\" (UniqueName: \"kubernetes.io/projected/84f5d033-9073-49c8-80b0-3457a4a2dc14-kube-api-access-zldxr\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.761453 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84f5d033-9073-49c8-80b0-3457a4a2dc14-serving-cert\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.761508 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/84f5d033-9073-49c8-80b0-3457a4a2dc14-tmp\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.761538 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84f5d033-9073-49c8-80b0-3457a4a2dc14-client-ca\") pod 
\"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.761602 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-client-ca" (OuterVolumeSpecName: "client-ca") pod "7610e477-a93a-43b7-9eb2-8600a48ecac9" (UID: "7610e477-a93a-43b7-9eb2-8600a48ecac9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.761724 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84f5d033-9073-49c8-80b0-3457a4a2dc14-proxy-ca-bundles\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.761777 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f5d033-9073-49c8-80b0-3457a4a2dc14-config\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.761831 5106 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.761854 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7610e477-a93a-43b7-9eb2-8600a48ecac9-tmp" (OuterVolumeSpecName: "tmp") pod "7610e477-a93a-43b7-9eb2-8600a48ecac9" 
(UID: "7610e477-a93a-43b7-9eb2-8600a48ecac9"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.762103 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-config" (OuterVolumeSpecName: "config") pod "7610e477-a93a-43b7-9eb2-8600a48ecac9" (UID: "7610e477-a93a-43b7-9eb2-8600a48ecac9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.762712 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7610e477-a93a-43b7-9eb2-8600a48ecac9" (UID: "7610e477-a93a-43b7-9eb2-8600a48ecac9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.766730 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7610e477-a93a-43b7-9eb2-8600a48ecac9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7610e477-a93a-43b7-9eb2-8600a48ecac9" (UID: "7610e477-a93a-43b7-9eb2-8600a48ecac9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.766835 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7610e477-a93a-43b7-9eb2-8600a48ecac9-kube-api-access-w96qk" (OuterVolumeSpecName: "kube-api-access-w96qk") pod "7610e477-a93a-43b7-9eb2-8600a48ecac9" (UID: "7610e477-a93a-43b7-9eb2-8600a48ecac9"). InnerVolumeSpecName "kube-api-access-w96qk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.862969 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zldxr\" (UniqueName: \"kubernetes.io/projected/84f5d033-9073-49c8-80b0-3457a4a2dc14-kube-api-access-zldxr\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.863031 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84f5d033-9073-49c8-80b0-3457a4a2dc14-serving-cert\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.863060 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/84f5d033-9073-49c8-80b0-3457a4a2dc14-tmp\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.863077 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84f5d033-9073-49c8-80b0-3457a4a2dc14-client-ca\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.863130 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84f5d033-9073-49c8-80b0-3457a4a2dc14-proxy-ca-bundles\") pod 
\"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.863157 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f5d033-9073-49c8-80b0-3457a4a2dc14-config\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.863198 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.863210 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7610e477-a93a-43b7-9eb2-8600a48ecac9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.863219 5106 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7610e477-a93a-43b7-9eb2-8600a48ecac9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.863230 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w96qk\" (UniqueName: \"kubernetes.io/projected/7610e477-a93a-43b7-9eb2-8600a48ecac9-kube-api-access-w96qk\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.863237 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7610e477-a93a-43b7-9eb2-8600a48ecac9-tmp\") on node \"crc\" DevicePath \"\"" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.863848 5106 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/84f5d033-9073-49c8-80b0-3457a4a2dc14-tmp\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.864664 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84f5d033-9073-49c8-80b0-3457a4a2dc14-proxy-ca-bundles\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.864729 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84f5d033-9073-49c8-80b0-3457a4a2dc14-client-ca\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.864879 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84f5d033-9073-49c8-80b0-3457a4a2dc14-config\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.867542 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84f5d033-9073-49c8-80b0-3457a4a2dc14-serving-cert\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.879343 5106 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zldxr\" (UniqueName: \"kubernetes.io/projected/84f5d033-9073-49c8-80b0-3457a4a2dc14-kube-api-access-zldxr\") pod \"controller-manager-5c44cc568c-gb4sq\" (UID: \"84f5d033-9073-49c8-80b0-3457a4a2dc14\") " pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:21 crc kubenswrapper[5106]: I0320 00:14:21.982241 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:22 crc kubenswrapper[5106]: I0320 00:14:22.228550 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" Mar 20 00:14:22 crc kubenswrapper[5106]: I0320 00:14:22.228559 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb767755d-6pnv8" event={"ID":"7610e477-a93a-43b7-9eb2-8600a48ecac9","Type":"ContainerDied","Data":"511b872f64ccaec1a4703f07c9357b43d0f1b3a8eb13ac3cde0cac2e844a3eb2"} Mar 20 00:14:22 crc kubenswrapper[5106]: I0320 00:14:22.229148 5106 scope.go:117] "RemoveContainer" containerID="f2d8022090426685795644a9e7412e6ca8ac6a031cfc1658a530f4bded34828a" Mar 20 00:14:22 crc kubenswrapper[5106]: I0320 00:14:22.270141 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fb767755d-6pnv8"] Mar 20 00:14:22 crc kubenswrapper[5106]: I0320 00:14:22.275180 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5fb767755d-6pnv8"] Mar 20 00:14:22 crc kubenswrapper[5106]: I0320 00:14:22.443117 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c44cc568c-gb4sq"] Mar 20 00:14:23 crc kubenswrapper[5106]: I0320 00:14:23.168147 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7610e477-a93a-43b7-9eb2-8600a48ecac9" path="/var/lib/kubelet/pods/7610e477-a93a-43b7-9eb2-8600a48ecac9/volumes" Mar 20 00:14:23 crc kubenswrapper[5106]: I0320 00:14:23.236041 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" event={"ID":"84f5d033-9073-49c8-80b0-3457a4a2dc14","Type":"ContainerStarted","Data":"a41d058b87398dc09fe549f623721b97625547582943dac1dc2425748dddf826"} Mar 20 00:14:23 crc kubenswrapper[5106]: I0320 00:14:23.236117 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" event={"ID":"84f5d033-9073-49c8-80b0-3457a4a2dc14","Type":"ContainerStarted","Data":"97611c22e73edf913b789ace0d6199ee48a5bafe40829019f50e42044fd8d3a9"} Mar 20 00:14:23 crc kubenswrapper[5106]: I0320 00:14:23.236516 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:23 crc kubenswrapper[5106]: I0320 00:14:23.241799 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" Mar 20 00:14:23 crc kubenswrapper[5106]: I0320 00:14:23.251193 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c44cc568c-gb4sq" podStartSLOduration=3.25117499 podStartE2EDuration="3.25117499s" podCreationTimestamp="2026-03-20 00:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:14:23.250011071 +0000 UTC m=+317.683745125" watchObservedRunningTime="2026-03-20 00:14:23.25117499 +0000 UTC m=+317.684909054" Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.140284 5106 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n"] Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.151667 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n"] Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.151810 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.153816 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.154383 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.173388 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wcrc\" (UniqueName: \"kubernetes.io/projected/e7dc3353-1ab1-4367-b916-6dea901c85c0-kube-api-access-7wcrc\") pod \"collect-profiles-29566095-vt58n\" (UID: \"e7dc3353-1ab1-4367-b916-6dea901c85c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.173438 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7dc3353-1ab1-4367-b916-6dea901c85c0-secret-volume\") pod \"collect-profiles-29566095-vt58n\" (UID: \"e7dc3353-1ab1-4367-b916-6dea901c85c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.173471 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/e7dc3353-1ab1-4367-b916-6dea901c85c0-config-volume\") pod \"collect-profiles-29566095-vt58n\" (UID: \"e7dc3353-1ab1-4367-b916-6dea901c85c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.274228 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7dc3353-1ab1-4367-b916-6dea901c85c0-config-volume\") pod \"collect-profiles-29566095-vt58n\" (UID: \"e7dc3353-1ab1-4367-b916-6dea901c85c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.274315 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7wcrc\" (UniqueName: \"kubernetes.io/projected/e7dc3353-1ab1-4367-b916-6dea901c85c0-kube-api-access-7wcrc\") pod \"collect-profiles-29566095-vt58n\" (UID: \"e7dc3353-1ab1-4367-b916-6dea901c85c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.274356 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7dc3353-1ab1-4367-b916-6dea901c85c0-secret-volume\") pod \"collect-profiles-29566095-vt58n\" (UID: \"e7dc3353-1ab1-4367-b916-6dea901c85c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.275352 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7dc3353-1ab1-4367-b916-6dea901c85c0-config-volume\") pod \"collect-profiles-29566095-vt58n\" (UID: \"e7dc3353-1ab1-4367-b916-6dea901c85c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 
00:15:00.280275 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7dc3353-1ab1-4367-b916-6dea901c85c0-secret-volume\") pod \"collect-profiles-29566095-vt58n\" (UID: \"e7dc3353-1ab1-4367-b916-6dea901c85c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.290448 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wcrc\" (UniqueName: \"kubernetes.io/projected/e7dc3353-1ab1-4367-b916-6dea901c85c0-kube-api-access-7wcrc\") pod \"collect-profiles-29566095-vt58n\" (UID: \"e7dc3353-1ab1-4367-b916-6dea901c85c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.391361 5106 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.480563 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.861243 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj"] Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.861727 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" podUID="14ea786b-10b5-4c0e-8cda-464f2db14788" containerName="route-controller-manager" containerID="cri-o://7f45cb16bb362e3ea8ade6b15c8fa1e16b1351508fecc239695b51a7ed8e4837" gracePeriod=30 Mar 20 00:15:00 crc kubenswrapper[5106]: I0320 00:15:00.939479 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n"] Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.340558 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.363866 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754795cf47-248gr"] Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.365564 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="14ea786b-10b5-4c0e-8cda-464f2db14788" containerName="route-controller-manager" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.365609 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ea786b-10b5-4c0e-8cda-464f2db14788" containerName="route-controller-manager" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.365741 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="14ea786b-10b5-4c0e-8cda-464f2db14788" containerName="route-controller-manager" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.372760 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.377315 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754795cf47-248gr"] Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.389808 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p62rd\" (UniqueName: \"kubernetes.io/projected/14ea786b-10b5-4c0e-8cda-464f2db14788-kube-api-access-p62rd\") pod \"14ea786b-10b5-4c0e-8cda-464f2db14788\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.389911 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14ea786b-10b5-4c0e-8cda-464f2db14788-client-ca\") pod \"14ea786b-10b5-4c0e-8cda-464f2db14788\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.390643 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14ea786b-10b5-4c0e-8cda-464f2db14788-client-ca" (OuterVolumeSpecName: "client-ca") pod "14ea786b-10b5-4c0e-8cda-464f2db14788" (UID: "14ea786b-10b5-4c0e-8cda-464f2db14788"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.390687 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14ea786b-10b5-4c0e-8cda-464f2db14788-serving-cert\") pod \"14ea786b-10b5-4c0e-8cda-464f2db14788\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.390766 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/14ea786b-10b5-4c0e-8cda-464f2db14788-tmp\") pod \"14ea786b-10b5-4c0e-8cda-464f2db14788\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.390795 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ea786b-10b5-4c0e-8cda-464f2db14788-config\") pod \"14ea786b-10b5-4c0e-8cda-464f2db14788\" (UID: \"14ea786b-10b5-4c0e-8cda-464f2db14788\") " Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.390970 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/23761f19-0c0d-4e8d-8acb-3d03ef48166d-tmp\") pod \"route-controller-manager-754795cf47-248gr\" (UID: \"23761f19-0c0d-4e8d-8acb-3d03ef48166d\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.391001 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdh7m\" (UniqueName: \"kubernetes.io/projected/23761f19-0c0d-4e8d-8acb-3d03ef48166d-kube-api-access-vdh7m\") pod \"route-controller-manager-754795cf47-248gr\" (UID: \"23761f19-0c0d-4e8d-8acb-3d03ef48166d\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 
crc kubenswrapper[5106]: I0320 00:15:01.391026 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23761f19-0c0d-4e8d-8acb-3d03ef48166d-config\") pod \"route-controller-manager-754795cf47-248gr\" (UID: \"23761f19-0c0d-4e8d-8acb-3d03ef48166d\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.391078 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23761f19-0c0d-4e8d-8acb-3d03ef48166d-client-ca\") pod \"route-controller-manager-754795cf47-248gr\" (UID: \"23761f19-0c0d-4e8d-8acb-3d03ef48166d\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.391105 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23761f19-0c0d-4e8d-8acb-3d03ef48166d-serving-cert\") pod \"route-controller-manager-754795cf47-248gr\" (UID: \"23761f19-0c0d-4e8d-8acb-3d03ef48166d\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.391137 5106 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14ea786b-10b5-4c0e-8cda-464f2db14788-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.391202 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14ea786b-10b5-4c0e-8cda-464f2db14788-tmp" (OuterVolumeSpecName: "tmp") pod "14ea786b-10b5-4c0e-8cda-464f2db14788" (UID: "14ea786b-10b5-4c0e-8cda-464f2db14788"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.391495 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14ea786b-10b5-4c0e-8cda-464f2db14788-config" (OuterVolumeSpecName: "config") pod "14ea786b-10b5-4c0e-8cda-464f2db14788" (UID: "14ea786b-10b5-4c0e-8cda-464f2db14788"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.395963 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14ea786b-10b5-4c0e-8cda-464f2db14788-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14ea786b-10b5-4c0e-8cda-464f2db14788" (UID: "14ea786b-10b5-4c0e-8cda-464f2db14788"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.396240 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ea786b-10b5-4c0e-8cda-464f2db14788-kube-api-access-p62rd" (OuterVolumeSpecName: "kube-api-access-p62rd") pod "14ea786b-10b5-4c0e-8cda-464f2db14788" (UID: "14ea786b-10b5-4c0e-8cda-464f2db14788"). InnerVolumeSpecName "kube-api-access-p62rd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.481833 5106 generic.go:358] "Generic (PLEG): container finished" podID="14ea786b-10b5-4c0e-8cda-464f2db14788" containerID="7f45cb16bb362e3ea8ade6b15c8fa1e16b1351508fecc239695b51a7ed8e4837" exitCode=0 Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.481884 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" event={"ID":"14ea786b-10b5-4c0e-8cda-464f2db14788","Type":"ContainerDied","Data":"7f45cb16bb362e3ea8ade6b15c8fa1e16b1351508fecc239695b51a7ed8e4837"} Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.481907 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.482260 5106 scope.go:117] "RemoveContainer" containerID="7f45cb16bb362e3ea8ade6b15c8fa1e16b1351508fecc239695b51a7ed8e4837" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.482247 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj" event={"ID":"14ea786b-10b5-4c0e-8cda-464f2db14788","Type":"ContainerDied","Data":"6bbc2cd5863932c61946a7636bfe9d5e875440dfe8b6298937cadfa7a78063d3"} Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.483995 5106 generic.go:358] "Generic (PLEG): container finished" podID="e7dc3353-1ab1-4367-b916-6dea901c85c0" containerID="9536f7d116d62825ad3260aee53f5915aa05ac06d85888e03642f51dd0d0b6ad" exitCode=0 Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.484070 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" 
event={"ID":"e7dc3353-1ab1-4367-b916-6dea901c85c0","Type":"ContainerDied","Data":"9536f7d116d62825ad3260aee53f5915aa05ac06d85888e03642f51dd0d0b6ad"} Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.484104 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" event={"ID":"e7dc3353-1ab1-4367-b916-6dea901c85c0","Type":"ContainerStarted","Data":"dd413706fa6bf95adfd4dd6b677a229f915aa3784154eb63baa13308c95752e9"} Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.492088 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/23761f19-0c0d-4e8d-8acb-3d03ef48166d-tmp\") pod \"route-controller-manager-754795cf47-248gr\" (UID: \"23761f19-0c0d-4e8d-8acb-3d03ef48166d\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.492130 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vdh7m\" (UniqueName: \"kubernetes.io/projected/23761f19-0c0d-4e8d-8acb-3d03ef48166d-kube-api-access-vdh7m\") pod \"route-controller-manager-754795cf47-248gr\" (UID: \"23761f19-0c0d-4e8d-8acb-3d03ef48166d\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.492155 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23761f19-0c0d-4e8d-8acb-3d03ef48166d-config\") pod \"route-controller-manager-754795cf47-248gr\" (UID: \"23761f19-0c0d-4e8d-8acb-3d03ef48166d\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.492195 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/23761f19-0c0d-4e8d-8acb-3d03ef48166d-client-ca\") pod \"route-controller-manager-754795cf47-248gr\" (UID: \"23761f19-0c0d-4e8d-8acb-3d03ef48166d\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.492220 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23761f19-0c0d-4e8d-8acb-3d03ef48166d-serving-cert\") pod \"route-controller-manager-754795cf47-248gr\" (UID: \"23761f19-0c0d-4e8d-8acb-3d03ef48166d\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.492288 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p62rd\" (UniqueName: \"kubernetes.io/projected/14ea786b-10b5-4c0e-8cda-464f2db14788-kube-api-access-p62rd\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.492320 5106 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14ea786b-10b5-4c0e-8cda-464f2db14788-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.492336 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/14ea786b-10b5-4c0e-8cda-464f2db14788-tmp\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.492351 5106 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ea786b-10b5-4c0e-8cda-464f2db14788-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.493346 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23761f19-0c0d-4e8d-8acb-3d03ef48166d-client-ca\") pod 
\"route-controller-manager-754795cf47-248gr\" (UID: \"23761f19-0c0d-4e8d-8acb-3d03ef48166d\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.495522 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/23761f19-0c0d-4e8d-8acb-3d03ef48166d-tmp\") pod \"route-controller-manager-754795cf47-248gr\" (UID: \"23761f19-0c0d-4e8d-8acb-3d03ef48166d\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.495976 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23761f19-0c0d-4e8d-8acb-3d03ef48166d-config\") pod \"route-controller-manager-754795cf47-248gr\" (UID: \"23761f19-0c0d-4e8d-8acb-3d03ef48166d\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.498611 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23761f19-0c0d-4e8d-8acb-3d03ef48166d-serving-cert\") pod \"route-controller-manager-754795cf47-248gr\" (UID: \"23761f19-0c0d-4e8d-8acb-3d03ef48166d\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.503293 5106 scope.go:117] "RemoveContainer" containerID="7f45cb16bb362e3ea8ade6b15c8fa1e16b1351508fecc239695b51a7ed8e4837" Mar 20 00:15:01 crc kubenswrapper[5106]: E0320 00:15:01.503704 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f45cb16bb362e3ea8ade6b15c8fa1e16b1351508fecc239695b51a7ed8e4837\": container with ID starting with 7f45cb16bb362e3ea8ade6b15c8fa1e16b1351508fecc239695b51a7ed8e4837 not found: ID does not exist" 
containerID="7f45cb16bb362e3ea8ade6b15c8fa1e16b1351508fecc239695b51a7ed8e4837" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.503825 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f45cb16bb362e3ea8ade6b15c8fa1e16b1351508fecc239695b51a7ed8e4837"} err="failed to get container status \"7f45cb16bb362e3ea8ade6b15c8fa1e16b1351508fecc239695b51a7ed8e4837\": rpc error: code = NotFound desc = could not find container \"7f45cb16bb362e3ea8ade6b15c8fa1e16b1351508fecc239695b51a7ed8e4837\": container with ID starting with 7f45cb16bb362e3ea8ade6b15c8fa1e16b1351508fecc239695b51a7ed8e4837 not found: ID does not exist" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.509222 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdh7m\" (UniqueName: \"kubernetes.io/projected/23761f19-0c0d-4e8d-8acb-3d03ef48166d-kube-api-access-vdh7m\") pod \"route-controller-manager-754795cf47-248gr\" (UID: \"23761f19-0c0d-4e8d-8acb-3d03ef48166d\") " pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.545496 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj"] Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.551175 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84bcc8595b-cfjhj"] Mar 20 00:15:01 crc kubenswrapper[5106]: I0320 00:15:01.691879 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.104699 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-754795cf47-248gr"] Mar 20 00:15:02 crc kubenswrapper[5106]: W0320 00:15:02.114881 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23761f19_0c0d_4e8d_8acb_3d03ef48166d.slice/crio-1af10b46e5126bcd54b7f0e2a554e2b9e9795f74893804da0447453b0cdeb2b2 WatchSource:0}: Error finding container 1af10b46e5126bcd54b7f0e2a554e2b9e9795f74893804da0447453b0cdeb2b2: Status 404 returned error can't find the container with id 1af10b46e5126bcd54b7f0e2a554e2b9e9795f74893804da0447453b0cdeb2b2 Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.492323 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" event={"ID":"23761f19-0c0d-4e8d-8acb-3d03ef48166d","Type":"ContainerStarted","Data":"cab7f687f3faa63bb38deef4f2cbd900d710213b358ec7f946f6d5089f2148a9"} Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.492883 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" event={"ID":"23761f19-0c0d-4e8d-8acb-3d03ef48166d","Type":"ContainerStarted","Data":"1af10b46e5126bcd54b7f0e2a554e2b9e9795f74893804da0447453b0cdeb2b2"} Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.493516 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.519535 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" 
podStartSLOduration=2.519473986 podStartE2EDuration="2.519473986s" podCreationTimestamp="2026-03-20 00:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:15:02.510818057 +0000 UTC m=+356.944552151" watchObservedRunningTime="2026-03-20 00:15:02.519473986 +0000 UTC m=+356.953208070" Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.767809 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-754795cf47-248gr" Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.793709 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.808789 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7dc3353-1ab1-4367-b916-6dea901c85c0-secret-volume\") pod \"e7dc3353-1ab1-4367-b916-6dea901c85c0\" (UID: \"e7dc3353-1ab1-4367-b916-6dea901c85c0\") " Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.809192 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wcrc\" (UniqueName: \"kubernetes.io/projected/e7dc3353-1ab1-4367-b916-6dea901c85c0-kube-api-access-7wcrc\") pod \"e7dc3353-1ab1-4367-b916-6dea901c85c0\" (UID: \"e7dc3353-1ab1-4367-b916-6dea901c85c0\") " Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.810110 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7dc3353-1ab1-4367-b916-6dea901c85c0-config-volume\") pod \"e7dc3353-1ab1-4367-b916-6dea901c85c0\" (UID: \"e7dc3353-1ab1-4367-b916-6dea901c85c0\") " Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.810561 5106 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7dc3353-1ab1-4367-b916-6dea901c85c0-config-volume" (OuterVolumeSpecName: "config-volume") pod "e7dc3353-1ab1-4367-b916-6dea901c85c0" (UID: "e7dc3353-1ab1-4367-b916-6dea901c85c0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.811316 5106 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7dc3353-1ab1-4367-b916-6dea901c85c0-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.815066 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7dc3353-1ab1-4367-b916-6dea901c85c0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e7dc3353-1ab1-4367-b916-6dea901c85c0" (UID: "e7dc3353-1ab1-4367-b916-6dea901c85c0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.816531 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7dc3353-1ab1-4367-b916-6dea901c85c0-kube-api-access-7wcrc" (OuterVolumeSpecName: "kube-api-access-7wcrc") pod "e7dc3353-1ab1-4367-b916-6dea901c85c0" (UID: "e7dc3353-1ab1-4367-b916-6dea901c85c0"). InnerVolumeSpecName "kube-api-access-7wcrc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.912321 5106 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7dc3353-1ab1-4367-b916-6dea901c85c0-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:02 crc kubenswrapper[5106]: I0320 00:15:02.912355 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7wcrc\" (UniqueName: \"kubernetes.io/projected/e7dc3353-1ab1-4367-b916-6dea901c85c0-kube-api-access-7wcrc\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:03 crc kubenswrapper[5106]: I0320 00:15:03.167068 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14ea786b-10b5-4c0e-8cda-464f2db14788" path="/var/lib/kubelet/pods/14ea786b-10b5-4c0e-8cda-464f2db14788/volumes" Mar 20 00:15:03 crc kubenswrapper[5106]: I0320 00:15:03.502479 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" event={"ID":"e7dc3353-1ab1-4367-b916-6dea901c85c0","Type":"ContainerDied","Data":"dd413706fa6bf95adfd4dd6b677a229f915aa3784154eb63baa13308c95752e9"} Mar 20 00:15:03 crc kubenswrapper[5106]: I0320 00:15:03.502528 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd413706fa6bf95adfd4dd6b677a229f915aa3784154eb63baa13308c95752e9" Mar 20 00:15:03 crc kubenswrapper[5106]: I0320 00:15:03.502762 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566095-vt58n" Mar 20 00:15:16 crc kubenswrapper[5106]: I0320 00:15:16.993543 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bpzzz"] Mar 20 00:15:16 crc kubenswrapper[5106]: I0320 00:15:16.994805 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bpzzz" podUID="87f9f10a-e8ec-450d-b0a6-ea285c273dc4" containerName="registry-server" containerID="cri-o://e13dcb8ba2b7384705ea15ebbe798ed5ecff796937f08cccd03195a8d1897818" gracePeriod=30 Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.003768 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qtqct"] Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.004244 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qtqct" podUID="2902d42b-f752-4b77-9aef-994def9350ba" containerName="registry-server" containerID="cri-o://21993beea9e42bd5c23ffcf033e0ed2814b4fec0bb397cb7f534a99637317eec" gracePeriod=30 Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.007773 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-xfn66"] Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.008056 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" podUID="59096bb7-5757-4196-96a5-f14e967998e7" containerName="marketplace-operator" containerID="cri-o://9496e4bc8643e33a4a7cd9bcbb93e36c1bd0141ef265af2f7ee9631a4c736610" gracePeriod=30 Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.017821 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7cgp"] Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.018357 
5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c7cgp" podUID="55a6a924-c50c-40e9-bce1-a4a8a636c5e4" containerName="registry-server" containerID="cri-o://01731054a441977aa29c5c757e1e4d2fca5b5d800e2f51b00cf428500fb2a145" gracePeriod=30 Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.033568 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nv56p"] Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.034110 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nv56p" podUID="862d0f24-7d93-4dd5-a664-398213a26a24" containerName="registry-server" containerID="cri-o://439b1f9a519fd91ae3c6376b61333f9a6a63c2c47246c6af7030d8f416aa0842" gracePeriod=30 Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.039140 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-ltdql"] Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.040000 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7dc3353-1ab1-4367-b916-6dea901c85c0" containerName="collect-profiles" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.040013 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7dc3353-1ab1-4367-b916-6dea901c85c0" containerName="collect-profiles" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.040155 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="e7dc3353-1ab1-4367-b916-6dea901c85c0" containerName="collect-profiles" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.050795 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-ltdql"] Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.050931 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.225468 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-449m4\" (UniqueName: \"kubernetes.io/projected/37e54f88-deec-4246-981e-cae42f1f759f-kube-api-access-449m4\") pod \"marketplace-operator-547dbd544d-ltdql\" (UID: \"37e54f88-deec-4246-981e-cae42f1f759f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.225792 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37e54f88-deec-4246-981e-cae42f1f759f-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-ltdql\" (UID: \"37e54f88-deec-4246-981e-cae42f1f759f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.225850 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37e54f88-deec-4246-981e-cae42f1f759f-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-ltdql\" (UID: \"37e54f88-deec-4246-981e-cae42f1f759f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.225878 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37e54f88-deec-4246-981e-cae42f1f759f-tmp\") pod \"marketplace-operator-547dbd544d-ltdql\" (UID: \"37e54f88-deec-4246-981e-cae42f1f759f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.327335 5106 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-449m4\" (UniqueName: \"kubernetes.io/projected/37e54f88-deec-4246-981e-cae42f1f759f-kube-api-access-449m4\") pod \"marketplace-operator-547dbd544d-ltdql\" (UID: \"37e54f88-deec-4246-981e-cae42f1f759f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.327380 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37e54f88-deec-4246-981e-cae42f1f759f-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-ltdql\" (UID: \"37e54f88-deec-4246-981e-cae42f1f759f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.327433 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37e54f88-deec-4246-981e-cae42f1f759f-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-ltdql\" (UID: \"37e54f88-deec-4246-981e-cae42f1f759f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.327464 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37e54f88-deec-4246-981e-cae42f1f759f-tmp\") pod \"marketplace-operator-547dbd544d-ltdql\" (UID: \"37e54f88-deec-4246-981e-cae42f1f759f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.328075 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37e54f88-deec-4246-981e-cae42f1f759f-tmp\") pod \"marketplace-operator-547dbd544d-ltdql\" (UID: \"37e54f88-deec-4246-981e-cae42f1f759f\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.328946 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37e54f88-deec-4246-981e-cae42f1f759f-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-ltdql\" (UID: \"37e54f88-deec-4246-981e-cae42f1f759f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.341436 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37e54f88-deec-4246-981e-cae42f1f759f-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-ltdql\" (UID: \"37e54f88-deec-4246-981e-cae42f1f759f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.354562 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-449m4\" (UniqueName: \"kubernetes.io/projected/37e54f88-deec-4246-981e-cae42f1f759f-kube-api-access-449m4\") pod \"marketplace-operator-547dbd544d-ltdql\" (UID: \"37e54f88-deec-4246-981e-cae42f1f759f\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.408253 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.412956 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.483914 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qtqct" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.498681 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bpzzz" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.530619 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59096bb7-5757-4196-96a5-f14e967998e7-marketplace-trusted-ca\") pod \"59096bb7-5757-4196-96a5-f14e967998e7\" (UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.531203 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrdmn\" (UniqueName: \"kubernetes.io/projected/59096bb7-5757-4196-96a5-f14e967998e7-kube-api-access-lrdmn\") pod \"59096bb7-5757-4196-96a5-f14e967998e7\" (UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.531752 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/59096bb7-5757-4196-96a5-f14e967998e7-tmp\") pod \"59096bb7-5757-4196-96a5-f14e967998e7\" (UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.531879 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/59096bb7-5757-4196-96a5-f14e967998e7-marketplace-operator-metrics\") pod \"59096bb7-5757-4196-96a5-f14e967998e7\" (UID: \"59096bb7-5757-4196-96a5-f14e967998e7\") " Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.532191 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59096bb7-5757-4196-96a5-f14e967998e7-tmp" (OuterVolumeSpecName: "tmp") pod 
"59096bb7-5757-4196-96a5-f14e967998e7" (UID: "59096bb7-5757-4196-96a5-f14e967998e7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.534309 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59096bb7-5757-4196-96a5-f14e967998e7-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "59096bb7-5757-4196-96a5-f14e967998e7" (UID: "59096bb7-5757-4196-96a5-f14e967998e7"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.536760 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59096bb7-5757-4196-96a5-f14e967998e7-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "59096bb7-5757-4196-96a5-f14e967998e7" (UID: "59096bb7-5757-4196-96a5-f14e967998e7"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.555972 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59096bb7-5757-4196-96a5-f14e967998e7-kube-api-access-lrdmn" (OuterVolumeSpecName: "kube-api-access-lrdmn") pod "59096bb7-5757-4196-96a5-f14e967998e7" (UID: "59096bb7-5757-4196-96a5-f14e967998e7"). InnerVolumeSpecName "kube-api-access-lrdmn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.597000 5106 generic.go:358] "Generic (PLEG): container finished" podID="2902d42b-f752-4b77-9aef-994def9350ba" containerID="21993beea9e42bd5c23ffcf033e0ed2814b4fec0bb397cb7f534a99637317eec" exitCode=0 Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.597046 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtqct" event={"ID":"2902d42b-f752-4b77-9aef-994def9350ba","Type":"ContainerDied","Data":"21993beea9e42bd5c23ffcf033e0ed2814b4fec0bb397cb7f534a99637317eec"} Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.597113 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtqct" event={"ID":"2902d42b-f752-4b77-9aef-994def9350ba","Type":"ContainerDied","Data":"7e78a7183785c8e8270176a866bea0c6cff0c6280b8632ee66c40fec7618e129"} Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.597124 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qtqct" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.597133 5106 scope.go:117] "RemoveContainer" containerID="21993beea9e42bd5c23ffcf033e0ed2814b4fec0bb397cb7f534a99637317eec" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.609642 5106 generic.go:358] "Generic (PLEG): container finished" podID="862d0f24-7d93-4dd5-a664-398213a26a24" containerID="439b1f9a519fd91ae3c6376b61333f9a6a63c2c47246c6af7030d8f416aa0842" exitCode=0 Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.609817 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv56p" event={"ID":"862d0f24-7d93-4dd5-a664-398213a26a24","Type":"ContainerDied","Data":"439b1f9a519fd91ae3c6376b61333f9a6a63c2c47246c6af7030d8f416aa0842"} Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.615076 5106 generic.go:358] "Generic (PLEG): container finished" podID="59096bb7-5757-4196-96a5-f14e967998e7" containerID="9496e4bc8643e33a4a7cd9bcbb93e36c1bd0141ef265af2f7ee9631a4c736610" exitCode=0 Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.615200 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.615198 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" event={"ID":"59096bb7-5757-4196-96a5-f14e967998e7","Type":"ContainerDied","Data":"9496e4bc8643e33a4a7cd9bcbb93e36c1bd0141ef265af2f7ee9631a4c736610"} Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.615448 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-xfn66" event={"ID":"59096bb7-5757-4196-96a5-f14e967998e7","Type":"ContainerDied","Data":"1a7702b74517303943adabb2ff5993398f40d095b0d97c5bff706c52e8c2477d"} Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.618969 5106 generic.go:358] "Generic (PLEG): container finished" podID="55a6a924-c50c-40e9-bce1-a4a8a636c5e4" containerID="01731054a441977aa29c5c757e1e4d2fca5b5d800e2f51b00cf428500fb2a145" exitCode=0 Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.619065 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7cgp" event={"ID":"55a6a924-c50c-40e9-bce1-a4a8a636c5e4","Type":"ContainerDied","Data":"01731054a441977aa29c5c757e1e4d2fca5b5d800e2f51b00cf428500fb2a145"} Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.622184 5106 generic.go:358] "Generic (PLEG): container finished" podID="87f9f10a-e8ec-450d-b0a6-ea285c273dc4" containerID="e13dcb8ba2b7384705ea15ebbe798ed5ecff796937f08cccd03195a8d1897818" exitCode=0 Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.622305 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpzzz" event={"ID":"87f9f10a-e8ec-450d-b0a6-ea285c273dc4","Type":"ContainerDied","Data":"e13dcb8ba2b7384705ea15ebbe798ed5ecff796937f08cccd03195a8d1897818"} Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.622331 5106 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bpzzz" event={"ID":"87f9f10a-e8ec-450d-b0a6-ea285c273dc4","Type":"ContainerDied","Data":"d800a9cf0ee82125d37b382f8ec833454e5c8854f9fd2aa9778d6c772251fd40"} Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.622427 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bpzzz" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.633415 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2902d42b-f752-4b77-9aef-994def9350ba-utilities\") pod \"2902d42b-f752-4b77-9aef-994def9350ba\" (UID: \"2902d42b-f752-4b77-9aef-994def9350ba\") " Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.633460 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-utilities\") pod \"87f9f10a-e8ec-450d-b0a6-ea285c273dc4\" (UID: \"87f9f10a-e8ec-450d-b0a6-ea285c273dc4\") " Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.633523 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-796wq\" (UniqueName: \"kubernetes.io/projected/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-kube-api-access-796wq\") pod \"87f9f10a-e8ec-450d-b0a6-ea285c273dc4\" (UID: \"87f9f10a-e8ec-450d-b0a6-ea285c273dc4\") " Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.633611 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2902d42b-f752-4b77-9aef-994def9350ba-catalog-content\") pod \"2902d42b-f752-4b77-9aef-994def9350ba\" (UID: \"2902d42b-f752-4b77-9aef-994def9350ba\") " Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.633646 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mpgvz\" (UniqueName: \"kubernetes.io/projected/2902d42b-f752-4b77-9aef-994def9350ba-kube-api-access-mpgvz\") pod \"2902d42b-f752-4b77-9aef-994def9350ba\" (UID: \"2902d42b-f752-4b77-9aef-994def9350ba\") " Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.633789 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-catalog-content\") pod \"87f9f10a-e8ec-450d-b0a6-ea285c273dc4\" (UID: \"87f9f10a-e8ec-450d-b0a6-ea285c273dc4\") " Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.634076 5106 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/59096bb7-5757-4196-96a5-f14e967998e7-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.634099 5106 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59096bb7-5757-4196-96a5-f14e967998e7-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.634133 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lrdmn\" (UniqueName: \"kubernetes.io/projected/59096bb7-5757-4196-96a5-f14e967998e7-kube-api-access-lrdmn\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.634148 5106 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/59096bb7-5757-4196-96a5-f14e967998e7-tmp\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.634620 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-utilities" (OuterVolumeSpecName: "utilities") pod "87f9f10a-e8ec-450d-b0a6-ea285c273dc4" (UID: 
"87f9f10a-e8ec-450d-b0a6-ea285c273dc4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.634786 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2902d42b-f752-4b77-9aef-994def9350ba-utilities" (OuterVolumeSpecName: "utilities") pod "2902d42b-f752-4b77-9aef-994def9350ba" (UID: "2902d42b-f752-4b77-9aef-994def9350ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.638177 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-kube-api-access-796wq" (OuterVolumeSpecName: "kube-api-access-796wq") pod "87f9f10a-e8ec-450d-b0a6-ea285c273dc4" (UID: "87f9f10a-e8ec-450d-b0a6-ea285c273dc4"). InnerVolumeSpecName "kube-api-access-796wq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.643114 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2902d42b-f752-4b77-9aef-994def9350ba-kube-api-access-mpgvz" (OuterVolumeSpecName: "kube-api-access-mpgvz") pod "2902d42b-f752-4b77-9aef-994def9350ba" (UID: "2902d42b-f752-4b77-9aef-994def9350ba"). InnerVolumeSpecName "kube-api-access-mpgvz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.649798 5106 scope.go:117] "RemoveContainer" containerID="9bd1a3a8c5c13c4ae5a858a57b40ee55b4e5bcbc2aff8c3b437f6c6f8a415b8d" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.652447 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-xfn66"] Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.656331 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-xfn66"] Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.678088 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c7cgp" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.684814 5106 scope.go:117] "RemoveContainer" containerID="5927fbee8ea9e2237728b85dfdb1ff1f5f0d444d76607a5eb8807ac99dac73ea" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.685158 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nv56p" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.703832 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2902d42b-f752-4b77-9aef-994def9350ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2902d42b-f752-4b77-9aef-994def9350ba" (UID: "2902d42b-f752-4b77-9aef-994def9350ba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.708267 5106 scope.go:117] "RemoveContainer" containerID="21993beea9e42bd5c23ffcf033e0ed2814b4fec0bb397cb7f534a99637317eec" Mar 20 00:15:17 crc kubenswrapper[5106]: E0320 00:15:17.708789 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21993beea9e42bd5c23ffcf033e0ed2814b4fec0bb397cb7f534a99637317eec\": container with ID starting with 21993beea9e42bd5c23ffcf033e0ed2814b4fec0bb397cb7f534a99637317eec not found: ID does not exist" containerID="21993beea9e42bd5c23ffcf033e0ed2814b4fec0bb397cb7f534a99637317eec" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.708812 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21993beea9e42bd5c23ffcf033e0ed2814b4fec0bb397cb7f534a99637317eec"} err="failed to get container status \"21993beea9e42bd5c23ffcf033e0ed2814b4fec0bb397cb7f534a99637317eec\": rpc error: code = NotFound desc = could not find container \"21993beea9e42bd5c23ffcf033e0ed2814b4fec0bb397cb7f534a99637317eec\": container with ID starting with 21993beea9e42bd5c23ffcf033e0ed2814b4fec0bb397cb7f534a99637317eec not found: ID does not exist" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.708828 5106 scope.go:117] "RemoveContainer" containerID="9bd1a3a8c5c13c4ae5a858a57b40ee55b4e5bcbc2aff8c3b437f6c6f8a415b8d" Mar 20 00:15:17 crc kubenswrapper[5106]: E0320 00:15:17.709112 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bd1a3a8c5c13c4ae5a858a57b40ee55b4e5bcbc2aff8c3b437f6c6f8a415b8d\": container with ID starting with 9bd1a3a8c5c13c4ae5a858a57b40ee55b4e5bcbc2aff8c3b437f6c6f8a415b8d not found: ID does not exist" containerID="9bd1a3a8c5c13c4ae5a858a57b40ee55b4e5bcbc2aff8c3b437f6c6f8a415b8d" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.709127 
5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bd1a3a8c5c13c4ae5a858a57b40ee55b4e5bcbc2aff8c3b437f6c6f8a415b8d"} err="failed to get container status \"9bd1a3a8c5c13c4ae5a858a57b40ee55b4e5bcbc2aff8c3b437f6c6f8a415b8d\": rpc error: code = NotFound desc = could not find container \"9bd1a3a8c5c13c4ae5a858a57b40ee55b4e5bcbc2aff8c3b437f6c6f8a415b8d\": container with ID starting with 9bd1a3a8c5c13c4ae5a858a57b40ee55b4e5bcbc2aff8c3b437f6c6f8a415b8d not found: ID does not exist" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.709196 5106 scope.go:117] "RemoveContainer" containerID="5927fbee8ea9e2237728b85dfdb1ff1f5f0d444d76607a5eb8807ac99dac73ea" Mar 20 00:15:17 crc kubenswrapper[5106]: E0320 00:15:17.709513 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5927fbee8ea9e2237728b85dfdb1ff1f5f0d444d76607a5eb8807ac99dac73ea\": container with ID starting with 5927fbee8ea9e2237728b85dfdb1ff1f5f0d444d76607a5eb8807ac99dac73ea not found: ID does not exist" containerID="5927fbee8ea9e2237728b85dfdb1ff1f5f0d444d76607a5eb8807ac99dac73ea" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.709536 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5927fbee8ea9e2237728b85dfdb1ff1f5f0d444d76607a5eb8807ac99dac73ea"} err="failed to get container status \"5927fbee8ea9e2237728b85dfdb1ff1f5f0d444d76607a5eb8807ac99dac73ea\": rpc error: code = NotFound desc = could not find container \"5927fbee8ea9e2237728b85dfdb1ff1f5f0d444d76607a5eb8807ac99dac73ea\": container with ID starting with 5927fbee8ea9e2237728b85dfdb1ff1f5f0d444d76607a5eb8807ac99dac73ea not found: ID does not exist" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.709556 5106 scope.go:117] "RemoveContainer" containerID="9496e4bc8643e33a4a7cd9bcbb93e36c1bd0141ef265af2f7ee9631a4c736610" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 
00:15:17.711155 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87f9f10a-e8ec-450d-b0a6-ea285c273dc4" (UID: "87f9f10a-e8ec-450d-b0a6-ea285c273dc4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.732557 5106 scope.go:117] "RemoveContainer" containerID="09995baf980859a7d2ed42f91b2cd769f870108d9650c84bfaf6aceedf897639" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.734785 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.734816 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2902d42b-f752-4b77-9aef-994def9350ba-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.734825 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.734834 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-796wq\" (UniqueName: \"kubernetes.io/projected/87f9f10a-e8ec-450d-b0a6-ea285c273dc4-kube-api-access-796wq\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.734843 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2902d42b-f752-4b77-9aef-994def9350ba-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.734851 5106 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mpgvz\" (UniqueName: \"kubernetes.io/projected/2902d42b-f752-4b77-9aef-994def9350ba-kube-api-access-mpgvz\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.745758 5106 scope.go:117] "RemoveContainer" containerID="9496e4bc8643e33a4a7cd9bcbb93e36c1bd0141ef265af2f7ee9631a4c736610" Mar 20 00:15:17 crc kubenswrapper[5106]: E0320 00:15:17.746288 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9496e4bc8643e33a4a7cd9bcbb93e36c1bd0141ef265af2f7ee9631a4c736610\": container with ID starting with 9496e4bc8643e33a4a7cd9bcbb93e36c1bd0141ef265af2f7ee9631a4c736610 not found: ID does not exist" containerID="9496e4bc8643e33a4a7cd9bcbb93e36c1bd0141ef265af2f7ee9631a4c736610" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.746323 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9496e4bc8643e33a4a7cd9bcbb93e36c1bd0141ef265af2f7ee9631a4c736610"} err="failed to get container status \"9496e4bc8643e33a4a7cd9bcbb93e36c1bd0141ef265af2f7ee9631a4c736610\": rpc error: code = NotFound desc = could not find container \"9496e4bc8643e33a4a7cd9bcbb93e36c1bd0141ef265af2f7ee9631a4c736610\": container with ID starting with 9496e4bc8643e33a4a7cd9bcbb93e36c1bd0141ef265af2f7ee9631a4c736610 not found: ID does not exist" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.746350 5106 scope.go:117] "RemoveContainer" containerID="09995baf980859a7d2ed42f91b2cd769f870108d9650c84bfaf6aceedf897639" Mar 20 00:15:17 crc kubenswrapper[5106]: E0320 00:15:17.746632 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09995baf980859a7d2ed42f91b2cd769f870108d9650c84bfaf6aceedf897639\": container with ID starting with 09995baf980859a7d2ed42f91b2cd769f870108d9650c84bfaf6aceedf897639 not found: ID 
does not exist" containerID="09995baf980859a7d2ed42f91b2cd769f870108d9650c84bfaf6aceedf897639" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.746665 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09995baf980859a7d2ed42f91b2cd769f870108d9650c84bfaf6aceedf897639"} err="failed to get container status \"09995baf980859a7d2ed42f91b2cd769f870108d9650c84bfaf6aceedf897639\": rpc error: code = NotFound desc = could not find container \"09995baf980859a7d2ed42f91b2cd769f870108d9650c84bfaf6aceedf897639\": container with ID starting with 09995baf980859a7d2ed42f91b2cd769f870108d9650c84bfaf6aceedf897639 not found: ID does not exist" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.746684 5106 scope.go:117] "RemoveContainer" containerID="e13dcb8ba2b7384705ea15ebbe798ed5ecff796937f08cccd03195a8d1897818" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.759245 5106 scope.go:117] "RemoveContainer" containerID="9ec75929d505629495ba9bfecb2e0c8c799062de039671f12024b4b0c809f898" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.775054 5106 scope.go:117] "RemoveContainer" containerID="60b95042fea4fae8470fa8078b9ea5a148251293d4def512099e5c097bbb3254" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.789393 5106 scope.go:117] "RemoveContainer" containerID="e13dcb8ba2b7384705ea15ebbe798ed5ecff796937f08cccd03195a8d1897818" Mar 20 00:15:17 crc kubenswrapper[5106]: E0320 00:15:17.789804 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e13dcb8ba2b7384705ea15ebbe798ed5ecff796937f08cccd03195a8d1897818\": container with ID starting with e13dcb8ba2b7384705ea15ebbe798ed5ecff796937f08cccd03195a8d1897818 not found: ID does not exist" containerID="e13dcb8ba2b7384705ea15ebbe798ed5ecff796937f08cccd03195a8d1897818" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.789848 5106 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e13dcb8ba2b7384705ea15ebbe798ed5ecff796937f08cccd03195a8d1897818"} err="failed to get container status \"e13dcb8ba2b7384705ea15ebbe798ed5ecff796937f08cccd03195a8d1897818\": rpc error: code = NotFound desc = could not find container \"e13dcb8ba2b7384705ea15ebbe798ed5ecff796937f08cccd03195a8d1897818\": container with ID starting with e13dcb8ba2b7384705ea15ebbe798ed5ecff796937f08cccd03195a8d1897818 not found: ID does not exist" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.789874 5106 scope.go:117] "RemoveContainer" containerID="9ec75929d505629495ba9bfecb2e0c8c799062de039671f12024b4b0c809f898" Mar 20 00:15:17 crc kubenswrapper[5106]: E0320 00:15:17.790210 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec75929d505629495ba9bfecb2e0c8c799062de039671f12024b4b0c809f898\": container with ID starting with 9ec75929d505629495ba9bfecb2e0c8c799062de039671f12024b4b0c809f898 not found: ID does not exist" containerID="9ec75929d505629495ba9bfecb2e0c8c799062de039671f12024b4b0c809f898" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.790249 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec75929d505629495ba9bfecb2e0c8c799062de039671f12024b4b0c809f898"} err="failed to get container status \"9ec75929d505629495ba9bfecb2e0c8c799062de039671f12024b4b0c809f898\": rpc error: code = NotFound desc = could not find container \"9ec75929d505629495ba9bfecb2e0c8c799062de039671f12024b4b0c809f898\": container with ID starting with 9ec75929d505629495ba9bfecb2e0c8c799062de039671f12024b4b0c809f898 not found: ID does not exist" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.790276 5106 scope.go:117] "RemoveContainer" containerID="60b95042fea4fae8470fa8078b9ea5a148251293d4def512099e5c097bbb3254" Mar 20 00:15:17 crc kubenswrapper[5106]: E0320 00:15:17.790677 5106 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"60b95042fea4fae8470fa8078b9ea5a148251293d4def512099e5c097bbb3254\": container with ID starting with 60b95042fea4fae8470fa8078b9ea5a148251293d4def512099e5c097bbb3254 not found: ID does not exist" containerID="60b95042fea4fae8470fa8078b9ea5a148251293d4def512099e5c097bbb3254" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.790710 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60b95042fea4fae8470fa8078b9ea5a148251293d4def512099e5c097bbb3254"} err="failed to get container status \"60b95042fea4fae8470fa8078b9ea5a148251293d4def512099e5c097bbb3254\": rpc error: code = NotFound desc = could not find container \"60b95042fea4fae8470fa8078b9ea5a148251293d4def512099e5c097bbb3254\": container with ID starting with 60b95042fea4fae8470fa8078b9ea5a148251293d4def512099e5c097bbb3254 not found: ID does not exist" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.835800 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7hsm\" (UniqueName: \"kubernetes.io/projected/862d0f24-7d93-4dd5-a664-398213a26a24-kube-api-access-g7hsm\") pod \"862d0f24-7d93-4dd5-a664-398213a26a24\" (UID: \"862d0f24-7d93-4dd5-a664-398213a26a24\") " Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.835878 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862d0f24-7d93-4dd5-a664-398213a26a24-utilities\") pod \"862d0f24-7d93-4dd5-a664-398213a26a24\" (UID: \"862d0f24-7d93-4dd5-a664-398213a26a24\") " Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.835946 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-utilities\") pod \"55a6a924-c50c-40e9-bce1-a4a8a636c5e4\" (UID: \"55a6a924-c50c-40e9-bce1-a4a8a636c5e4\") " 
Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.835997 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwm6t\" (UniqueName: \"kubernetes.io/projected/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-kube-api-access-gwm6t\") pod \"55a6a924-c50c-40e9-bce1-a4a8a636c5e4\" (UID: \"55a6a924-c50c-40e9-bce1-a4a8a636c5e4\") " Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.836051 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862d0f24-7d93-4dd5-a664-398213a26a24-catalog-content\") pod \"862d0f24-7d93-4dd5-a664-398213a26a24\" (UID: \"862d0f24-7d93-4dd5-a664-398213a26a24\") " Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.836084 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-catalog-content\") pod \"55a6a924-c50c-40e9-bce1-a4a8a636c5e4\" (UID: \"55a6a924-c50c-40e9-bce1-a4a8a636c5e4\") " Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.837070 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/862d0f24-7d93-4dd5-a664-398213a26a24-utilities" (OuterVolumeSpecName: "utilities") pod "862d0f24-7d93-4dd5-a664-398213a26a24" (UID: "862d0f24-7d93-4dd5-a664-398213a26a24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.837647 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-utilities" (OuterVolumeSpecName: "utilities") pod "55a6a924-c50c-40e9-bce1-a4a8a636c5e4" (UID: "55a6a924-c50c-40e9-bce1-a4a8a636c5e4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.839212 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-kube-api-access-gwm6t" (OuterVolumeSpecName: "kube-api-access-gwm6t") pod "55a6a924-c50c-40e9-bce1-a4a8a636c5e4" (UID: "55a6a924-c50c-40e9-bce1-a4a8a636c5e4"). InnerVolumeSpecName "kube-api-access-gwm6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.841463 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862d0f24-7d93-4dd5-a664-398213a26a24-kube-api-access-g7hsm" (OuterVolumeSpecName: "kube-api-access-g7hsm") pod "862d0f24-7d93-4dd5-a664-398213a26a24" (UID: "862d0f24-7d93-4dd5-a664-398213a26a24"). InnerVolumeSpecName "kube-api-access-g7hsm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.854125 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55a6a924-c50c-40e9-bce1-a4a8a636c5e4" (UID: "55a6a924-c50c-40e9-bce1-a4a8a636c5e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.857114 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-ltdql"] Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.937442 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qtqct"] Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.937491 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g7hsm\" (UniqueName: \"kubernetes.io/projected/862d0f24-7d93-4dd5-a664-398213a26a24-kube-api-access-g7hsm\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.937523 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862d0f24-7d93-4dd5-a664-398213a26a24-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.937538 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.937549 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gwm6t\" (UniqueName: \"kubernetes.io/projected/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-kube-api-access-gwm6t\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.937560 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55a6a924-c50c-40e9-bce1-a4a8a636c5e4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.940731 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qtqct"] Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.962964 
5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/862d0f24-7d93-4dd5-a664-398213a26a24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "862d0f24-7d93-4dd5-a664-398213a26a24" (UID: "862d0f24-7d93-4dd5-a664-398213a26a24"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.975336 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bpzzz"] Mar 20 00:15:17 crc kubenswrapper[5106]: I0320 00:15:17.986789 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bpzzz"] Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.038728 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862d0f24-7d93-4dd5-a664-398213a26a24-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.629909 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c7cgp" event={"ID":"55a6a924-c50c-40e9-bce1-a4a8a636c5e4","Type":"ContainerDied","Data":"8cf65a141ce68f290067ef5012cc7c3c95ac1a7df52d5b8baa642c25c4f1d171"} Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.630279 5106 scope.go:117] "RemoveContainer" containerID="01731054a441977aa29c5c757e1e4d2fca5b5d800e2f51b00cf428500fb2a145" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.630029 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c7cgp" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.636766 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nv56p" event={"ID":"862d0f24-7d93-4dd5-a664-398213a26a24","Type":"ContainerDied","Data":"b1157e0dcc789467bde3e43a0dadbc7ca284a05f501158cfe68d2c02904ac431"} Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.636802 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nv56p" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.639825 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" event={"ID":"37e54f88-deec-4246-981e-cae42f1f759f","Type":"ContainerStarted","Data":"06b8a95442c470448792deac4915cb3df48de5999cf324e80c3cb49320997c74"} Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.639868 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" event={"ID":"37e54f88-deec-4246-981e-cae42f1f759f","Type":"ContainerStarted","Data":"4b7f91a3ee0a1a3e14c65f1b1b40b4172e033685b5725435323b8bb94906e14d"} Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.640000 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.645878 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.663025 5106 scope.go:117] "RemoveContainer" containerID="cd2c49a06d7db7b3180a483e2cd8c0785966df52f08ced095e84714b5c6239a9" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.666069 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/marketplace-operator-547dbd544d-ltdql" podStartSLOduration=1.666050394 podStartE2EDuration="1.666050394s" podCreationTimestamp="2026-03-20 00:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:15:18.655738023 +0000 UTC m=+373.089472077" watchObservedRunningTime="2026-03-20 00:15:18.666050394 +0000 UTC m=+373.099784438" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.678189 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nv56p"] Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.679794 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nv56p"] Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.686962 5106 scope.go:117] "RemoveContainer" containerID="d59737e0972e778adeb492427f2977907638bd92891b8cb10cea9d1fa8f483f5" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.718493 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7cgp"] Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.719740 5106 scope.go:117] "RemoveContainer" containerID="439b1f9a519fd91ae3c6376b61333f9a6a63c2c47246c6af7030d8f416aa0842" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.723835 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c7cgp"] Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.734362 5106 scope.go:117] "RemoveContainer" containerID="27732b6ace7382979c7097798c075f006cc505832366461f4ed51a505bac19ea" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.753857 5106 scope.go:117] "RemoveContainer" containerID="b704909afe5cb44892692a2875bebe422d5f85b1adc3ff025a106f0f85e325b0" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.811429 5106 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-s25nb"] Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.811983 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="862d0f24-7d93-4dd5-a664-398213a26a24" containerName="extract-utilities" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812025 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="862d0f24-7d93-4dd5-a664-398213a26a24" containerName="extract-utilities" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812034 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="87f9f10a-e8ec-450d-b0a6-ea285c273dc4" containerName="extract-utilities" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812040 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f9f10a-e8ec-450d-b0a6-ea285c273dc4" containerName="extract-utilities" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812048 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="87f9f10a-e8ec-450d-b0a6-ea285c273dc4" containerName="extract-content" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812055 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f9f10a-e8ec-450d-b0a6-ea285c273dc4" containerName="extract-content" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812071 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="59096bb7-5757-4196-96a5-f14e967998e7" containerName="marketplace-operator" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812076 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="59096bb7-5757-4196-96a5-f14e967998e7" containerName="marketplace-operator" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812084 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="862d0f24-7d93-4dd5-a664-398213a26a24" containerName="extract-content" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 
00:15:18.812089 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="862d0f24-7d93-4dd5-a664-398213a26a24" containerName="extract-content" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812097 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55a6a924-c50c-40e9-bce1-a4a8a636c5e4" containerName="extract-content" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812104 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a6a924-c50c-40e9-bce1-a4a8a636c5e4" containerName="extract-content" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812110 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55a6a924-c50c-40e9-bce1-a4a8a636c5e4" containerName="registry-server" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812116 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a6a924-c50c-40e9-bce1-a4a8a636c5e4" containerName="registry-server" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812127 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="59096bb7-5757-4196-96a5-f14e967998e7" containerName="marketplace-operator" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812132 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="59096bb7-5757-4196-96a5-f14e967998e7" containerName="marketplace-operator" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812140 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2902d42b-f752-4b77-9aef-994def9350ba" containerName="extract-content" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812145 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="2902d42b-f752-4b77-9aef-994def9350ba" containerName="extract-content" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812152 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2902d42b-f752-4b77-9aef-994def9350ba" 
containerName="registry-server" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812159 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="2902d42b-f752-4b77-9aef-994def9350ba" containerName="registry-server" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812167 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="87f9f10a-e8ec-450d-b0a6-ea285c273dc4" containerName="registry-server" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812173 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f9f10a-e8ec-450d-b0a6-ea285c273dc4" containerName="registry-server" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812183 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55a6a924-c50c-40e9-bce1-a4a8a636c5e4" containerName="extract-utilities" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812188 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="55a6a924-c50c-40e9-bce1-a4a8a636c5e4" containerName="extract-utilities" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812195 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2902d42b-f752-4b77-9aef-994def9350ba" containerName="extract-utilities" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812201 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="2902d42b-f752-4b77-9aef-994def9350ba" containerName="extract-utilities" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812209 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="862d0f24-7d93-4dd5-a664-398213a26a24" containerName="registry-server" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812214 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="862d0f24-7d93-4dd5-a664-398213a26a24" containerName="registry-server" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812299 5106 memory_manager.go:356] "RemoveStaleState removing 
state" podUID="2902d42b-f752-4b77-9aef-994def9350ba" containerName="registry-server" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812311 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="87f9f10a-e8ec-450d-b0a6-ea285c273dc4" containerName="registry-server" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812319 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="862d0f24-7d93-4dd5-a664-398213a26a24" containerName="registry-server" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812328 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="55a6a924-c50c-40e9-bce1-a4a8a636c5e4" containerName="registry-server" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812337 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="59096bb7-5757-4196-96a5-f14e967998e7" containerName="marketplace-operator" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.812347 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="59096bb7-5757-4196-96a5-f14e967998e7" containerName="marketplace-operator" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.817892 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s25nb"] Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.817997 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s25nb" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.828069 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.949512 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cfbf52b-0060-44e8-9485-e5c04de2ad60-catalog-content\") pod \"certified-operators-s25nb\" (UID: \"4cfbf52b-0060-44e8-9485-e5c04de2ad60\") " pod="openshift-marketplace/certified-operators-s25nb" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.949561 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cfbf52b-0060-44e8-9485-e5c04de2ad60-utilities\") pod \"certified-operators-s25nb\" (UID: \"4cfbf52b-0060-44e8-9485-e5c04de2ad60\") " pod="openshift-marketplace/certified-operators-s25nb" Mar 20 00:15:18 crc kubenswrapper[5106]: I0320 00:15:18.949686 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcfgt\" (UniqueName: \"kubernetes.io/projected/4cfbf52b-0060-44e8-9485-e5c04de2ad60-kube-api-access-bcfgt\") pod \"certified-operators-s25nb\" (UID: \"4cfbf52b-0060-44e8-9485-e5c04de2ad60\") " pod="openshift-marketplace/certified-operators-s25nb" Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.051413 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cfbf52b-0060-44e8-9485-e5c04de2ad60-catalog-content\") pod \"certified-operators-s25nb\" (UID: \"4cfbf52b-0060-44e8-9485-e5c04de2ad60\") " pod="openshift-marketplace/certified-operators-s25nb" Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.051504 5106 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cfbf52b-0060-44e8-9485-e5c04de2ad60-utilities\") pod \"certified-operators-s25nb\" (UID: \"4cfbf52b-0060-44e8-9485-e5c04de2ad60\") " pod="openshift-marketplace/certified-operators-s25nb"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.051612 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bcfgt\" (UniqueName: \"kubernetes.io/projected/4cfbf52b-0060-44e8-9485-e5c04de2ad60-kube-api-access-bcfgt\") pod \"certified-operators-s25nb\" (UID: \"4cfbf52b-0060-44e8-9485-e5c04de2ad60\") " pod="openshift-marketplace/certified-operators-s25nb"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.051983 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cfbf52b-0060-44e8-9485-e5c04de2ad60-catalog-content\") pod \"certified-operators-s25nb\" (UID: \"4cfbf52b-0060-44e8-9485-e5c04de2ad60\") " pod="openshift-marketplace/certified-operators-s25nb"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.052533 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cfbf52b-0060-44e8-9485-e5c04de2ad60-utilities\") pod \"certified-operators-s25nb\" (UID: \"4cfbf52b-0060-44e8-9485-e5c04de2ad60\") " pod="openshift-marketplace/certified-operators-s25nb"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.072435 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcfgt\" (UniqueName: \"kubernetes.io/projected/4cfbf52b-0060-44e8-9485-e5c04de2ad60-kube-api-access-bcfgt\") pod \"certified-operators-s25nb\" (UID: \"4cfbf52b-0060-44e8-9485-e5c04de2ad60\") " pod="openshift-marketplace/certified-operators-s25nb"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.142903 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s25nb"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.169121 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2902d42b-f752-4b77-9aef-994def9350ba" path="/var/lib/kubelet/pods/2902d42b-f752-4b77-9aef-994def9350ba/volumes"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.171278 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a6a924-c50c-40e9-bce1-a4a8a636c5e4" path="/var/lib/kubelet/pods/55a6a924-c50c-40e9-bce1-a4a8a636c5e4/volumes"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.173804 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59096bb7-5757-4196-96a5-f14e967998e7" path="/var/lib/kubelet/pods/59096bb7-5757-4196-96a5-f14e967998e7/volumes"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.175401 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="862d0f24-7d93-4dd5-a664-398213a26a24" path="/var/lib/kubelet/pods/862d0f24-7d93-4dd5-a664-398213a26a24/volumes"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.176490 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f9f10a-e8ec-450d-b0a6-ea285c273dc4" path="/var/lib/kubelet/pods/87f9f10a-e8ec-450d-b0a6-ea285c273dc4/volumes"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.558159 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s25nb"]
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.649846 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s25nb" event={"ID":"4cfbf52b-0060-44e8-9485-e5c04de2ad60","Type":"ContainerStarted","Data":"ca5f9fa8113c5bd77d31933d0c9f9f7a89f40adfdb11d11494da745584732aae"}
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.800388 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bjjqw"]
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.806146 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjjqw"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.811013 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.811807 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjjqw"]
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.863952 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7305fef-edd6-41c7-8db6-33177da2a53c-catalog-content\") pod \"redhat-marketplace-bjjqw\" (UID: \"e7305fef-edd6-41c7-8db6-33177da2a53c\") " pod="openshift-marketplace/redhat-marketplace-bjjqw"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.864011 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hwjx\" (UniqueName: \"kubernetes.io/projected/e7305fef-edd6-41c7-8db6-33177da2a53c-kube-api-access-2hwjx\") pod \"redhat-marketplace-bjjqw\" (UID: \"e7305fef-edd6-41c7-8db6-33177da2a53c\") " pod="openshift-marketplace/redhat-marketplace-bjjqw"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.864101 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7305fef-edd6-41c7-8db6-33177da2a53c-utilities\") pod \"redhat-marketplace-bjjqw\" (UID: \"e7305fef-edd6-41c7-8db6-33177da2a53c\") " pod="openshift-marketplace/redhat-marketplace-bjjqw"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.964834 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2hwjx\" (UniqueName: \"kubernetes.io/projected/e7305fef-edd6-41c7-8db6-33177da2a53c-kube-api-access-2hwjx\") pod \"redhat-marketplace-bjjqw\" (UID: \"e7305fef-edd6-41c7-8db6-33177da2a53c\") " pod="openshift-marketplace/redhat-marketplace-bjjqw"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.964906 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7305fef-edd6-41c7-8db6-33177da2a53c-utilities\") pod \"redhat-marketplace-bjjqw\" (UID: \"e7305fef-edd6-41c7-8db6-33177da2a53c\") " pod="openshift-marketplace/redhat-marketplace-bjjqw"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.964990 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7305fef-edd6-41c7-8db6-33177da2a53c-catalog-content\") pod \"redhat-marketplace-bjjqw\" (UID: \"e7305fef-edd6-41c7-8db6-33177da2a53c\") " pod="openshift-marketplace/redhat-marketplace-bjjqw"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.965478 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7305fef-edd6-41c7-8db6-33177da2a53c-catalog-content\") pod \"redhat-marketplace-bjjqw\" (UID: \"e7305fef-edd6-41c7-8db6-33177da2a53c\") " pod="openshift-marketplace/redhat-marketplace-bjjqw"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.965626 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7305fef-edd6-41c7-8db6-33177da2a53c-utilities\") pod \"redhat-marketplace-bjjqw\" (UID: \"e7305fef-edd6-41c7-8db6-33177da2a53c\") " pod="openshift-marketplace/redhat-marketplace-bjjqw"
Mar 20 00:15:19 crc kubenswrapper[5106]: I0320 00:15:19.983178 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hwjx\" (UniqueName: \"kubernetes.io/projected/e7305fef-edd6-41c7-8db6-33177da2a53c-kube-api-access-2hwjx\") pod \"redhat-marketplace-bjjqw\" (UID: \"e7305fef-edd6-41c7-8db6-33177da2a53c\") " pod="openshift-marketplace/redhat-marketplace-bjjqw"
Mar 20 00:15:20 crc kubenswrapper[5106]: I0320 00:15:20.133427 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjjqw"
Mar 20 00:15:20 crc kubenswrapper[5106]: I0320 00:15:20.514446 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjjqw"]
Mar 20 00:15:20 crc kubenswrapper[5106]: W0320 00:15:20.519447 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7305fef_edd6_41c7_8db6_33177da2a53c.slice/crio-a095fb49543fea9a70188b968a0c9ecc48363754e53bf6f408d64f402f0f2981 WatchSource:0}: Error finding container a095fb49543fea9a70188b968a0c9ecc48363754e53bf6f408d64f402f0f2981: Status 404 returned error can't find the container with id a095fb49543fea9a70188b968a0c9ecc48363754e53bf6f408d64f402f0f2981
Mar 20 00:15:20 crc kubenswrapper[5106]: I0320 00:15:20.658196 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjjqw" event={"ID":"e7305fef-edd6-41c7-8db6-33177da2a53c","Type":"ContainerStarted","Data":"499814fe1e5d65c935070795567278a14125e3062681aa9b264459014c783ddf"}
Mar 20 00:15:20 crc kubenswrapper[5106]: I0320 00:15:20.658261 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjjqw" event={"ID":"e7305fef-edd6-41c7-8db6-33177da2a53c","Type":"ContainerStarted","Data":"a095fb49543fea9a70188b968a0c9ecc48363754e53bf6f408d64f402f0f2981"}
Mar 20 00:15:20 crc kubenswrapper[5106]: I0320 00:15:20.660065 5106 generic.go:358] "Generic (PLEG): container finished" podID="4cfbf52b-0060-44e8-9485-e5c04de2ad60" containerID="261b8b2f424ac6f2c3f0bd9589b440d751cf2b7398ba3bbca0b36287f86c896b" exitCode=0
Mar 20 00:15:20 crc kubenswrapper[5106]: I0320 00:15:20.660119 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s25nb" event={"ID":"4cfbf52b-0060-44e8-9485-e5c04de2ad60","Type":"ContainerDied","Data":"261b8b2f424ac6f2c3f0bd9589b440d751cf2b7398ba3bbca0b36287f86c896b"}
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.199352 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6gvc7"]
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.204120 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6gvc7"
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.208148 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.213522 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6gvc7"]
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.276407 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd7758c-5444-4836-adae-5613bdf96c2f-catalog-content\") pod \"redhat-operators-6gvc7\" (UID: \"cdd7758c-5444-4836-adae-5613bdf96c2f\") " pod="openshift-marketplace/redhat-operators-6gvc7"
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.276552 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvwkg\" (UniqueName: \"kubernetes.io/projected/cdd7758c-5444-4836-adae-5613bdf96c2f-kube-api-access-lvwkg\") pod \"redhat-operators-6gvc7\" (UID: \"cdd7758c-5444-4836-adae-5613bdf96c2f\") " pod="openshift-marketplace/redhat-operators-6gvc7"
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.276699 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd7758c-5444-4836-adae-5613bdf96c2f-utilities\") pod \"redhat-operators-6gvc7\" (UID: \"cdd7758c-5444-4836-adae-5613bdf96c2f\") " pod="openshift-marketplace/redhat-operators-6gvc7"
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.377282 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd7758c-5444-4836-adae-5613bdf96c2f-catalog-content\") pod \"redhat-operators-6gvc7\" (UID: \"cdd7758c-5444-4836-adae-5613bdf96c2f\") " pod="openshift-marketplace/redhat-operators-6gvc7"
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.377346 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lvwkg\" (UniqueName: \"kubernetes.io/projected/cdd7758c-5444-4836-adae-5613bdf96c2f-kube-api-access-lvwkg\") pod \"redhat-operators-6gvc7\" (UID: \"cdd7758c-5444-4836-adae-5613bdf96c2f\") " pod="openshift-marketplace/redhat-operators-6gvc7"
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.377389 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd7758c-5444-4836-adae-5613bdf96c2f-utilities\") pod \"redhat-operators-6gvc7\" (UID: \"cdd7758c-5444-4836-adae-5613bdf96c2f\") " pod="openshift-marketplace/redhat-operators-6gvc7"
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.377895 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdd7758c-5444-4836-adae-5613bdf96c2f-catalog-content\") pod \"redhat-operators-6gvc7\" (UID: \"cdd7758c-5444-4836-adae-5613bdf96c2f\") " pod="openshift-marketplace/redhat-operators-6gvc7"
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.377907 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdd7758c-5444-4836-adae-5613bdf96c2f-utilities\") pod \"redhat-operators-6gvc7\" (UID: \"cdd7758c-5444-4836-adae-5613bdf96c2f\") " pod="openshift-marketplace/redhat-operators-6gvc7"
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.401335 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvwkg\" (UniqueName: \"kubernetes.io/projected/cdd7758c-5444-4836-adae-5613bdf96c2f-kube-api-access-lvwkg\") pod \"redhat-operators-6gvc7\" (UID: \"cdd7758c-5444-4836-adae-5613bdf96c2f\") " pod="openshift-marketplace/redhat-operators-6gvc7"
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.524098 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6gvc7"
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.670725 5106 generic.go:358] "Generic (PLEG): container finished" podID="e7305fef-edd6-41c7-8db6-33177da2a53c" containerID="499814fe1e5d65c935070795567278a14125e3062681aa9b264459014c783ddf" exitCode=0
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.671015 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjjqw" event={"ID":"e7305fef-edd6-41c7-8db6-33177da2a53c","Type":"ContainerDied","Data":"499814fe1e5d65c935070795567278a14125e3062681aa9b264459014c783ddf"}
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.673662 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s25nb" event={"ID":"4cfbf52b-0060-44e8-9485-e5c04de2ad60","Type":"ContainerStarted","Data":"03f8bd3aba70a82b31cf3041afbfad1bb456f9ccf4b6bc45e3d9f0ad3f0c7f12"}
Mar 20 00:15:21 crc kubenswrapper[5106]: I0320 00:15:21.998501 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6gvc7"]
Mar 20 00:15:22 crc kubenswrapper[5106]: W0320 00:15:22.006315 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdd7758c_5444_4836_adae_5613bdf96c2f.slice/crio-05656415cfbeaf047ab0fdf96ce20f99451afe1c1201b204485be1b6870af6b8 WatchSource:0}: Error finding container 05656415cfbeaf047ab0fdf96ce20f99451afe1c1201b204485be1b6870af6b8: Status 404 returned error can't find the container with id 05656415cfbeaf047ab0fdf96ce20f99451afe1c1201b204485be1b6870af6b8
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.205257 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-znlnc"]
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.217607 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-znlnc"]
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.217753 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-znlnc"
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.220630 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.395931 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a880c089-c934-4c9e-a478-0d9d53a55c81-utilities\") pod \"community-operators-znlnc\" (UID: \"a880c089-c934-4c9e-a478-0d9d53a55c81\") " pod="openshift-marketplace/community-operators-znlnc"
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.395974 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a880c089-c934-4c9e-a478-0d9d53a55c81-catalog-content\") pod \"community-operators-znlnc\" (UID: \"a880c089-c934-4c9e-a478-0d9d53a55c81\") " pod="openshift-marketplace/community-operators-znlnc"
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.396035 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pss2h\" (UniqueName: \"kubernetes.io/projected/a880c089-c934-4c9e-a478-0d9d53a55c81-kube-api-access-pss2h\") pod \"community-operators-znlnc\" (UID: \"a880c089-c934-4c9e-a478-0d9d53a55c81\") " pod="openshift-marketplace/community-operators-znlnc"
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.498299 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pss2h\" (UniqueName: \"kubernetes.io/projected/a880c089-c934-4c9e-a478-0d9d53a55c81-kube-api-access-pss2h\") pod \"community-operators-znlnc\" (UID: \"a880c089-c934-4c9e-a478-0d9d53a55c81\") " pod="openshift-marketplace/community-operators-znlnc"
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.499376 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a880c089-c934-4c9e-a478-0d9d53a55c81-utilities\") pod \"community-operators-znlnc\" (UID: \"a880c089-c934-4c9e-a478-0d9d53a55c81\") " pod="openshift-marketplace/community-operators-znlnc"
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.499558 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a880c089-c934-4c9e-a478-0d9d53a55c81-catalog-content\") pod \"community-operators-znlnc\" (UID: \"a880c089-c934-4c9e-a478-0d9d53a55c81\") " pod="openshift-marketplace/community-operators-znlnc"
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.499888 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a880c089-c934-4c9e-a478-0d9d53a55c81-utilities\") pod \"community-operators-znlnc\" (UID: \"a880c089-c934-4c9e-a478-0d9d53a55c81\") " pod="openshift-marketplace/community-operators-znlnc"
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.501031 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a880c089-c934-4c9e-a478-0d9d53a55c81-catalog-content\") pod \"community-operators-znlnc\" (UID: \"a880c089-c934-4c9e-a478-0d9d53a55c81\") " pod="openshift-marketplace/community-operators-znlnc"
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.520891 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pss2h\" (UniqueName: \"kubernetes.io/projected/a880c089-c934-4c9e-a478-0d9d53a55c81-kube-api-access-pss2h\") pod \"community-operators-znlnc\" (UID: \"a880c089-c934-4c9e-a478-0d9d53a55c81\") " pod="openshift-marketplace/community-operators-znlnc"
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.534044 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-znlnc"
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.682187 5106 generic.go:358] "Generic (PLEG): container finished" podID="4cfbf52b-0060-44e8-9485-e5c04de2ad60" containerID="03f8bd3aba70a82b31cf3041afbfad1bb456f9ccf4b6bc45e3d9f0ad3f0c7f12" exitCode=0
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.682280 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s25nb" event={"ID":"4cfbf52b-0060-44e8-9485-e5c04de2ad60","Type":"ContainerDied","Data":"03f8bd3aba70a82b31cf3041afbfad1bb456f9ccf4b6bc45e3d9f0ad3f0c7f12"}
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.685205 5106 generic.go:358] "Generic (PLEG): container finished" podID="cdd7758c-5444-4836-adae-5613bdf96c2f" containerID="ef181541ba88bafb419d5f9d1967806beef98336f5c4cda59d366220c6751e07" exitCode=0
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.685666 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gvc7" event={"ID":"cdd7758c-5444-4836-adae-5613bdf96c2f","Type":"ContainerDied","Data":"ef181541ba88bafb419d5f9d1967806beef98336f5c4cda59d366220c6751e07"}
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.685790 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gvc7" event={"ID":"cdd7758c-5444-4836-adae-5613bdf96c2f","Type":"ContainerStarted","Data":"05656415cfbeaf047ab0fdf96ce20f99451afe1c1201b204485be1b6870af6b8"}
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.690474 5106 generic.go:358] "Generic (PLEG): container finished" podID="e7305fef-edd6-41c7-8db6-33177da2a53c" containerID="580a016d05904f51e975c4d8b881ed9de9258cc78df49b28380bce7e8ea63a1e" exitCode=0
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.690539 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjjqw" event={"ID":"e7305fef-edd6-41c7-8db6-33177da2a53c","Type":"ContainerDied","Data":"580a016d05904f51e975c4d8b881ed9de9258cc78df49b28380bce7e8ea63a1e"}
Mar 20 00:15:22 crc kubenswrapper[5106]: I0320 00:15:22.778733 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-znlnc"]
Mar 20 00:15:22 crc kubenswrapper[5106]: W0320 00:15:22.783844 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda880c089_c934_4c9e_a478_0d9d53a55c81.slice/crio-83ebfb449e0d7d34f28a1733dc615f9ca61e6da15b1655bb8b5d2be70563cf59 WatchSource:0}: Error finding container 83ebfb449e0d7d34f28a1733dc615f9ca61e6da15b1655bb8b5d2be70563cf59: Status 404 returned error can't find the container with id 83ebfb449e0d7d34f28a1733dc615f9ca61e6da15b1655bb8b5d2be70563cf59
Mar 20 00:15:23 crc kubenswrapper[5106]: I0320 00:15:23.700848 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s25nb" event={"ID":"4cfbf52b-0060-44e8-9485-e5c04de2ad60","Type":"ContainerStarted","Data":"dd8166b78f63422df4002c6cbd5f4ca2ad14fb23cb2ec0e66cbae700d286089c"}
Mar 20 00:15:23 crc kubenswrapper[5106]: I0320 00:15:23.704955 5106 generic.go:358] "Generic (PLEG): container finished" podID="a880c089-c934-4c9e-a478-0d9d53a55c81" containerID="fbc8ab91d11709a71e13467a64562f0d51469cef44e1b896a3e9483e6e5fce66" exitCode=0
Mar 20 00:15:23 crc kubenswrapper[5106]: I0320 00:15:23.705051 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znlnc" event={"ID":"a880c089-c934-4c9e-a478-0d9d53a55c81","Type":"ContainerDied","Data":"fbc8ab91d11709a71e13467a64562f0d51469cef44e1b896a3e9483e6e5fce66"}
Mar 20 00:15:23 crc kubenswrapper[5106]: I0320 00:15:23.705113 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znlnc" event={"ID":"a880c089-c934-4c9e-a478-0d9d53a55c81","Type":"ContainerStarted","Data":"83ebfb449e0d7d34f28a1733dc615f9ca61e6da15b1655bb8b5d2be70563cf59"}
Mar 20 00:15:23 crc kubenswrapper[5106]: I0320 00:15:23.714419 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjjqw" event={"ID":"e7305fef-edd6-41c7-8db6-33177da2a53c","Type":"ContainerStarted","Data":"9f2514bcdc837aaac3a1cda449cda686c3ac2f006de2c34d9d8f0ef0e91c8fce"}
Mar 20 00:15:23 crc kubenswrapper[5106]: I0320 00:15:23.729557 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s25nb" podStartSLOduration=5.027313174 podStartE2EDuration="5.729539592s" podCreationTimestamp="2026-03-20 00:15:18 +0000 UTC" firstStartedPulling="2026-03-20 00:15:20.661009888 +0000 UTC m=+375.094743942" lastFinishedPulling="2026-03-20 00:15:21.363236296 +0000 UTC m=+375.796970360" observedRunningTime="2026-03-20 00:15:23.724452633 +0000 UTC m=+378.158186697" watchObservedRunningTime="2026-03-20 00:15:23.729539592 +0000 UTC m=+378.163273646"
Mar 20 00:15:23 crc kubenswrapper[5106]: I0320 00:15:23.765628 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bjjqw" podStartSLOduration=4.046774877 podStartE2EDuration="4.765610485s" podCreationTimestamp="2026-03-20 00:15:19 +0000 UTC" firstStartedPulling="2026-03-20 00:15:21.672761612 +0000 UTC m=+376.106495686" lastFinishedPulling="2026-03-20 00:15:22.39159724 +0000 UTC m=+376.825331294" observedRunningTime="2026-03-20 00:15:23.761227174 +0000 UTC m=+378.194961238" watchObservedRunningTime="2026-03-20 00:15:23.765610485 +0000 UTC m=+378.199344549"
Mar 20 00:15:24 crc kubenswrapper[5106]: I0320 00:15:24.725409 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znlnc" event={"ID":"a880c089-c934-4c9e-a478-0d9d53a55c81","Type":"ContainerStarted","Data":"1be16c3afa74c307654b83751d303df8ccb4b84d125183b00e9fe871c04e9750"}
Mar 20 00:15:24 crc kubenswrapper[5106]: I0320 00:15:24.729289 5106 generic.go:358] "Generic (PLEG): container finished" podID="cdd7758c-5444-4836-adae-5613bdf96c2f" containerID="1e51fe302f0daba44cb8c7ea87e5406d463c48f48d5c0f3a5571f12d9873d50c" exitCode=0
Mar 20 00:15:24 crc kubenswrapper[5106]: I0320 00:15:24.729392 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gvc7" event={"ID":"cdd7758c-5444-4836-adae-5613bdf96c2f","Type":"ContainerDied","Data":"1e51fe302f0daba44cb8c7ea87e5406d463c48f48d5c0f3a5571f12d9873d50c"}
Mar 20 00:15:25 crc kubenswrapper[5106]: I0320 00:15:25.736371 5106 generic.go:358] "Generic (PLEG): container finished" podID="a880c089-c934-4c9e-a478-0d9d53a55c81" containerID="1be16c3afa74c307654b83751d303df8ccb4b84d125183b00e9fe871c04e9750" exitCode=0
Mar 20 00:15:25 crc kubenswrapper[5106]: I0320 00:15:25.736524 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znlnc" event={"ID":"a880c089-c934-4c9e-a478-0d9d53a55c81","Type":"ContainerDied","Data":"1be16c3afa74c307654b83751d303df8ccb4b84d125183b00e9fe871c04e9750"}
Mar 20 00:15:25 crc kubenswrapper[5106]: I0320 00:15:25.736548 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-znlnc" event={"ID":"a880c089-c934-4c9e-a478-0d9d53a55c81","Type":"ContainerStarted","Data":"178d3bded5dee6a89801e568cb09b7d00273f62e530bd92182e777eb137406e9"}
Mar 20 00:15:25 crc kubenswrapper[5106]: I0320 00:15:25.739483 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gvc7" event={"ID":"cdd7758c-5444-4836-adae-5613bdf96c2f","Type":"ContainerStarted","Data":"cf5ab0ff95e1c5ff666e2f72ab95522423b3c708862780692380256916c29dc4"}
Mar 20 00:15:25 crc kubenswrapper[5106]: I0320 00:15:25.763324 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-znlnc" podStartSLOduration=3.097892113 podStartE2EDuration="3.763310168s" podCreationTimestamp="2026-03-20 00:15:22 +0000 UTC" firstStartedPulling="2026-03-20 00:15:23.705701788 +0000 UTC m=+378.139435842" lastFinishedPulling="2026-03-20 00:15:24.371119843 +0000 UTC m=+378.804853897" observedRunningTime="2026-03-20 00:15:25.759344237 +0000 UTC m=+380.193078291" watchObservedRunningTime="2026-03-20 00:15:25.763310168 +0000 UTC m=+380.197044222"
Mar 20 00:15:25 crc kubenswrapper[5106]: I0320 00:15:25.780933 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6gvc7" podStartSLOduration=3.881575656 podStartE2EDuration="4.780916473s" podCreationTimestamp="2026-03-20 00:15:21 +0000 UTC" firstStartedPulling="2026-03-20 00:15:22.686448395 +0000 UTC m=+377.120182449" lastFinishedPulling="2026-03-20 00:15:23.585789212 +0000 UTC m=+378.019523266" observedRunningTime="2026-03-20 00:15:25.777072626 +0000 UTC m=+380.210806680" watchObservedRunningTime="2026-03-20 00:15:25.780916473 +0000 UTC m=+380.214650537"
Mar 20 00:15:29 crc kubenswrapper[5106]: I0320 00:15:29.143140 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s25nb"
Mar 20 00:15:29 crc kubenswrapper[5106]: I0320 00:15:29.143661 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-s25nb"
Mar 20 00:15:29 crc kubenswrapper[5106]: I0320 00:15:29.201133 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s25nb"
Mar 20 00:15:29 crc kubenswrapper[5106]: I0320 00:15:29.833784 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s25nb"
Mar 20 00:15:30 crc kubenswrapper[5106]: I0320 00:15:30.133947 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bjjqw"
Mar 20 00:15:30 crc kubenswrapper[5106]: I0320 00:15:30.136073 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-bjjqw"
Mar 20 00:15:30 crc kubenswrapper[5106]: I0320 00:15:30.186844 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bjjqw"
Mar 20 00:15:30 crc kubenswrapper[5106]: I0320 00:15:30.821165 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bjjqw"
Mar 20 00:15:31 crc kubenswrapper[5106]: I0320 00:15:31.525189 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-6gvc7"
Mar 20 00:15:31 crc kubenswrapper[5106]: I0320 00:15:31.525240 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6gvc7"
Mar 20 00:15:31 crc kubenswrapper[5106]: I0320 00:15:31.571386 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6gvc7"
Mar 20 00:15:31 crc kubenswrapper[5106]: I0320 00:15:31.816831 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6gvc7"
Mar 20 00:15:32 crc kubenswrapper[5106]: I0320 00:15:32.534183 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-znlnc"
Mar 20 00:15:32 crc kubenswrapper[5106]: I0320 00:15:32.534836 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-znlnc"
Mar 20 00:15:32 crc kubenswrapper[5106]: I0320 00:15:32.581250 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-znlnc"
Mar 20 00:15:32 crc kubenswrapper[5106]: I0320 00:15:32.851515 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-znlnc"
Mar 20 00:15:55 crc kubenswrapper[5106]: I0320 00:15:55.373240 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 00:15:55 crc kubenswrapper[5106]: I0320 00:15:55.373803 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 00:16:00 crc kubenswrapper[5106]: I0320 00:16:00.133641 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566096-hb8rr"]
Mar 20 00:16:00 crc kubenswrapper[5106]: I0320 00:16:00.139687 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566096-hb8rr"
Mar 20 00:16:00 crc kubenswrapper[5106]: I0320 00:16:00.142307 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Mar 20 00:16:00 crc kubenswrapper[5106]: I0320 00:16:00.142472 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Mar 20 00:16:00 crc kubenswrapper[5106]: I0320 00:16:00.142809 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5fjw8\""
Mar 20 00:16:00 crc kubenswrapper[5106]: I0320 00:16:00.155352 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566096-hb8rr"]
Mar 20 00:16:00 crc kubenswrapper[5106]: I0320 00:16:00.224883 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq4nb\" (UniqueName: \"kubernetes.io/projected/a2b27eda-de8d-498f-b0e2-67e1c6aafd18-kube-api-access-xq4nb\") pod \"auto-csr-approver-29566096-hb8rr\" (UID: \"a2b27eda-de8d-498f-b0e2-67e1c6aafd18\") " pod="openshift-infra/auto-csr-approver-29566096-hb8rr"
Mar 20 00:16:00 crc kubenswrapper[5106]: I0320 00:16:00.326228 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xq4nb\" (UniqueName: \"kubernetes.io/projected/a2b27eda-de8d-498f-b0e2-67e1c6aafd18-kube-api-access-xq4nb\") pod \"auto-csr-approver-29566096-hb8rr\" (UID: \"a2b27eda-de8d-498f-b0e2-67e1c6aafd18\") " pod="openshift-infra/auto-csr-approver-29566096-hb8rr"
Mar 20 00:16:00 crc kubenswrapper[5106]: I0320 00:16:00.355859 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq4nb\" (UniqueName: \"kubernetes.io/projected/a2b27eda-de8d-498f-b0e2-67e1c6aafd18-kube-api-access-xq4nb\") pod \"auto-csr-approver-29566096-hb8rr\" (UID: \"a2b27eda-de8d-498f-b0e2-67e1c6aafd18\") " pod="openshift-infra/auto-csr-approver-29566096-hb8rr"
Mar 20 00:16:00 crc kubenswrapper[5106]: I0320 00:16:00.469197 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566096-hb8rr"
Mar 20 00:16:00 crc kubenswrapper[5106]: I0320 00:16:00.698747 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566096-hb8rr"]
Mar 20 00:16:00 crc kubenswrapper[5106]: I0320 00:16:00.952541 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566096-hb8rr" event={"ID":"a2b27eda-de8d-498f-b0e2-67e1c6aafd18","Type":"ContainerStarted","Data":"8c4078af0a0a6e64036ffa2d6e72291796c777ce610f0254d04555abe6580baa"}
Mar 20 00:16:02 crc kubenswrapper[5106]: I0320 00:16:02.970158 5106 generic.go:358] "Generic (PLEG): container finished" podID="a2b27eda-de8d-498f-b0e2-67e1c6aafd18" containerID="42a205c7541758bf07cecb27cdf77e17342aa4826950f1106aa1ad6c1004fd0b" exitCode=0
Mar 20 00:16:02 crc kubenswrapper[5106]: I0320 00:16:02.970235 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566096-hb8rr" event={"ID":"a2b27eda-de8d-498f-b0e2-67e1c6aafd18","Type":"ContainerDied","Data":"42a205c7541758bf07cecb27cdf77e17342aa4826950f1106aa1ad6c1004fd0b"}
Mar 20 00:16:04 crc kubenswrapper[5106]: I0320 00:16:04.287717 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566096-hb8rr"
Mar 20 00:16:04 crc kubenswrapper[5106]: I0320 00:16:04.405759 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq4nb\" (UniqueName: \"kubernetes.io/projected/a2b27eda-de8d-498f-b0e2-67e1c6aafd18-kube-api-access-xq4nb\") pod \"a2b27eda-de8d-498f-b0e2-67e1c6aafd18\" (UID: \"a2b27eda-de8d-498f-b0e2-67e1c6aafd18\") "
Mar 20 00:16:04 crc kubenswrapper[5106]: I0320 00:16:04.414797 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b27eda-de8d-498f-b0e2-67e1c6aafd18-kube-api-access-xq4nb" (OuterVolumeSpecName: "kube-api-access-xq4nb") pod "a2b27eda-de8d-498f-b0e2-67e1c6aafd18" (UID: "a2b27eda-de8d-498f-b0e2-67e1c6aafd18"). InnerVolumeSpecName "kube-api-access-xq4nb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:16:04 crc kubenswrapper[5106]: I0320 00:16:04.507745 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xq4nb\" (UniqueName: \"kubernetes.io/projected/a2b27eda-de8d-498f-b0e2-67e1c6aafd18-kube-api-access-xq4nb\") on node \"crc\" DevicePath \"\""
Mar 20 00:16:04 crc kubenswrapper[5106]: I0320 00:16:04.984956 5106 util.go:48] "No ready sandbox for pod can be
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566096-hb8rr" Mar 20 00:16:04 crc kubenswrapper[5106]: I0320 00:16:04.984998 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566096-hb8rr" event={"ID":"a2b27eda-de8d-498f-b0e2-67e1c6aafd18","Type":"ContainerDied","Data":"8c4078af0a0a6e64036ffa2d6e72291796c777ce610f0254d04555abe6580baa"} Mar 20 00:16:04 crc kubenswrapper[5106]: I0320 00:16:04.985679 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c4078af0a0a6e64036ffa2d6e72291796c777ce610f0254d04555abe6580baa" Mar 20 00:16:25 crc kubenswrapper[5106]: I0320 00:16:25.373890 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:16:25 crc kubenswrapper[5106]: I0320 00:16:25.375700 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:16:55 crc kubenswrapper[5106]: I0320 00:16:55.373464 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:16:55 crc kubenswrapper[5106]: I0320 00:16:55.375052 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:16:55 crc kubenswrapper[5106]: I0320 00:16:55.375174 5106 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:16:55 crc kubenswrapper[5106]: I0320 00:16:55.375952 5106 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6663498fde38077516653246979f4890e53ab8554f504d980573cec239ee48c3"} pod="openshift-machine-config-operator/machine-config-daemon-769dn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 00:16:55 crc kubenswrapper[5106]: I0320 00:16:55.376100 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" containerID="cri-o://6663498fde38077516653246979f4890e53ab8554f504d980573cec239ee48c3" gracePeriod=600 Mar 20 00:16:56 crc kubenswrapper[5106]: I0320 00:16:56.399628 5106 generic.go:358] "Generic (PLEG): container finished" podID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerID="6663498fde38077516653246979f4890e53ab8554f504d980573cec239ee48c3" exitCode=0 Mar 20 00:16:56 crc kubenswrapper[5106]: I0320 00:16:56.399779 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerDied","Data":"6663498fde38077516653246979f4890e53ab8554f504d980573cec239ee48c3"} Mar 20 00:16:56 crc kubenswrapper[5106]: I0320 00:16:56.400722 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" 
event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerStarted","Data":"1228d087c7bde3c99c7452feeb09cc740b7b75ef32a544f2a368d4a749bf059b"} Mar 20 00:16:56 crc kubenswrapper[5106]: I0320 00:16:56.400784 5106 scope.go:117] "RemoveContainer" containerID="e305e307099c05996c1326f05d1414ce358ed6c0ec58221736b93d0a4312344c" Mar 20 00:18:00 crc kubenswrapper[5106]: I0320 00:18:00.151973 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566098-4pwtk"] Mar 20 00:18:00 crc kubenswrapper[5106]: I0320 00:18:00.153822 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a2b27eda-de8d-498f-b0e2-67e1c6aafd18" containerName="oc" Mar 20 00:18:00 crc kubenswrapper[5106]: I0320 00:18:00.153848 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b27eda-de8d-498f-b0e2-67e1c6aafd18" containerName="oc" Mar 20 00:18:00 crc kubenswrapper[5106]: I0320 00:18:00.154029 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="a2b27eda-de8d-498f-b0e2-67e1c6aafd18" containerName="oc" Mar 20 00:18:00 crc kubenswrapper[5106]: I0320 00:18:00.160917 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566098-4pwtk" Mar 20 00:18:00 crc kubenswrapper[5106]: I0320 00:18:00.163032 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566098-4pwtk"] Mar 20 00:18:00 crc kubenswrapper[5106]: I0320 00:18:00.164904 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Mar 20 00:18:00 crc kubenswrapper[5106]: I0320 00:18:00.165252 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5fjw8\"" Mar 20 00:18:00 crc kubenswrapper[5106]: I0320 00:18:00.165464 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Mar 20 00:18:00 crc kubenswrapper[5106]: I0320 00:18:00.273324 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnrjn\" (UniqueName: \"kubernetes.io/projected/a06fbd90-04f2-40c9-9465-67cb7a1fdda4-kube-api-access-hnrjn\") pod \"auto-csr-approver-29566098-4pwtk\" (UID: \"a06fbd90-04f2-40c9-9465-67cb7a1fdda4\") " pod="openshift-infra/auto-csr-approver-29566098-4pwtk" Mar 20 00:18:00 crc kubenswrapper[5106]: I0320 00:18:00.375199 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hnrjn\" (UniqueName: \"kubernetes.io/projected/a06fbd90-04f2-40c9-9465-67cb7a1fdda4-kube-api-access-hnrjn\") pod \"auto-csr-approver-29566098-4pwtk\" (UID: \"a06fbd90-04f2-40c9-9465-67cb7a1fdda4\") " pod="openshift-infra/auto-csr-approver-29566098-4pwtk" Mar 20 00:18:00 crc kubenswrapper[5106]: I0320 00:18:00.405073 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnrjn\" (UniqueName: \"kubernetes.io/projected/a06fbd90-04f2-40c9-9465-67cb7a1fdda4-kube-api-access-hnrjn\") pod \"auto-csr-approver-29566098-4pwtk\" (UID: 
\"a06fbd90-04f2-40c9-9465-67cb7a1fdda4\") " pod="openshift-infra/auto-csr-approver-29566098-4pwtk" Mar 20 00:18:00 crc kubenswrapper[5106]: I0320 00:18:00.503174 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566098-4pwtk" Mar 20 00:18:00 crc kubenswrapper[5106]: I0320 00:18:00.936801 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566098-4pwtk"] Mar 20 00:18:01 crc kubenswrapper[5106]: I0320 00:18:01.835097 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566098-4pwtk" event={"ID":"a06fbd90-04f2-40c9-9465-67cb7a1fdda4","Type":"ContainerStarted","Data":"a0e4ab13ddcabc971d16bcf84fc5a5c1ab8c8f5481e0fd739ecfa3c3f37d0d69"} Mar 20 00:18:02 crc kubenswrapper[5106]: I0320 00:18:02.850284 5106 generic.go:358] "Generic (PLEG): container finished" podID="a06fbd90-04f2-40c9-9465-67cb7a1fdda4" containerID="64e6c72363c74d1fca1e26ec49ee0ff2c3ee760170974a8a7903d02be12ddfd2" exitCode=0 Mar 20 00:18:02 crc kubenswrapper[5106]: I0320 00:18:02.850478 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566098-4pwtk" event={"ID":"a06fbd90-04f2-40c9-9465-67cb7a1fdda4","Type":"ContainerDied","Data":"64e6c72363c74d1fca1e26ec49ee0ff2c3ee760170974a8a7903d02be12ddfd2"} Mar 20 00:18:04 crc kubenswrapper[5106]: I0320 00:18:04.052246 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566098-4pwtk" Mar 20 00:18:04 crc kubenswrapper[5106]: I0320 00:18:04.125824 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnrjn\" (UniqueName: \"kubernetes.io/projected/a06fbd90-04f2-40c9-9465-67cb7a1fdda4-kube-api-access-hnrjn\") pod \"a06fbd90-04f2-40c9-9465-67cb7a1fdda4\" (UID: \"a06fbd90-04f2-40c9-9465-67cb7a1fdda4\") " Mar 20 00:18:04 crc kubenswrapper[5106]: I0320 00:18:04.132594 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a06fbd90-04f2-40c9-9465-67cb7a1fdda4-kube-api-access-hnrjn" (OuterVolumeSpecName: "kube-api-access-hnrjn") pod "a06fbd90-04f2-40c9-9465-67cb7a1fdda4" (UID: "a06fbd90-04f2-40c9-9465-67cb7a1fdda4"). InnerVolumeSpecName "kube-api-access-hnrjn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:18:04 crc kubenswrapper[5106]: I0320 00:18:04.227366 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hnrjn\" (UniqueName: \"kubernetes.io/projected/a06fbd90-04f2-40c9-9465-67cb7a1fdda4-kube-api-access-hnrjn\") on node \"crc\" DevicePath \"\"" Mar 20 00:18:04 crc kubenswrapper[5106]: I0320 00:18:04.865716 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566098-4pwtk" event={"ID":"a06fbd90-04f2-40c9-9465-67cb7a1fdda4","Type":"ContainerDied","Data":"a0e4ab13ddcabc971d16bcf84fc5a5c1ab8c8f5481e0fd739ecfa3c3f37d0d69"} Mar 20 00:18:04 crc kubenswrapper[5106]: I0320 00:18:04.865759 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0e4ab13ddcabc971d16bcf84fc5a5c1ab8c8f5481e0fd739ecfa3c3f37d0d69" Mar 20 00:18:04 crc kubenswrapper[5106]: I0320 00:18:04.865807 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566098-4pwtk" Mar 20 00:18:05 crc kubenswrapper[5106]: I0320 00:18:05.114321 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566092-6knrz"] Mar 20 00:18:05 crc kubenswrapper[5106]: I0320 00:18:05.118277 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566092-6knrz"] Mar 20 00:18:05 crc kubenswrapper[5106]: I0320 00:18:05.168442 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92c58c24-f3dc-45d1-bf1f-1a679ae95553" path="/var/lib/kubelet/pods/92c58c24-f3dc-45d1-bf1f-1a679ae95553/volumes" Mar 20 00:18:55 crc kubenswrapper[5106]: I0320 00:18:55.373676 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:18:55 crc kubenswrapper[5106]: I0320 00:18:55.374394 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:19:07 crc kubenswrapper[5106]: I0320 00:19:07.481483 5106 scope.go:117] "RemoveContainer" containerID="23ba0a3f0411521039770c9b2a262483c546c2e681d96ed1c8eaaac220e72e5d" Mar 20 00:19:07 crc kubenswrapper[5106]: I0320 00:19:07.513470 5106 scope.go:117] "RemoveContainer" containerID="555cf5368ac0caa29bf7158992d54da737b48532e35d08e7d764c83fd4aa8e55" Mar 20 00:19:07 crc kubenswrapper[5106]: I0320 00:19:07.519759 5106 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/2.log" Mar 20 00:19:07 crc kubenswrapper[5106]: E0320 00:19:07.532314 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23ba0a3f0411521039770c9b2a262483c546c2e681d96ed1c8eaaac220e72e5d\": container with ID starting with 23ba0a3f0411521039770c9b2a262483c546c2e681d96ed1c8eaaac220e72e5d not found: ID does not exist" containerID="23ba0a3f0411521039770c9b2a262483c546c2e681d96ed1c8eaaac220e72e5d" Mar 20 00:19:07 crc kubenswrapper[5106]: I0320 00:19:07.532873 5106 image_gc_manager.go:222] "Failed to monitor images" err="get container status: runtime container status: rpc error: code = NotFound desc = could not find container \"23ba0a3f0411521039770c9b2a262483c546c2e681d96ed1c8eaaac220e72e5d\": container with ID starting with 23ba0a3f0411521039770c9b2a262483c546c2e681d96ed1c8eaaac220e72e5d not found: ID does not exist" Mar 20 00:19:07 crc kubenswrapper[5106]: I0320 00:19:07.546618 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/2.log" Mar 20 00:19:07 crc kubenswrapper[5106]: I0320 00:19:07.580091 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Mar 20 00:19:25 crc kubenswrapper[5106]: I0320 00:19:25.374115 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:19:25 crc kubenswrapper[5106]: I0320 00:19:25.374735 5106 prober.go:120] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:19:55 crc kubenswrapper[5106]: I0320 00:19:55.373959 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:19:55 crc kubenswrapper[5106]: I0320 00:19:55.374670 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:19:55 crc kubenswrapper[5106]: I0320 00:19:55.374727 5106 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:19:55 crc kubenswrapper[5106]: I0320 00:19:55.375435 5106 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1228d087c7bde3c99c7452feeb09cc740b7b75ef32a544f2a368d4a749bf059b"} pod="openshift-machine-config-operator/machine-config-daemon-769dn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 00:19:55 crc kubenswrapper[5106]: I0320 00:19:55.375499 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" 
containerID="cri-o://1228d087c7bde3c99c7452feeb09cc740b7b75ef32a544f2a368d4a749bf059b" gracePeriod=600 Mar 20 00:19:55 crc kubenswrapper[5106]: I0320 00:19:55.506241 5106 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 00:19:55 crc kubenswrapper[5106]: I0320 00:19:55.576485 5106 generic.go:358] "Generic (PLEG): container finished" podID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerID="1228d087c7bde3c99c7452feeb09cc740b7b75ef32a544f2a368d4a749bf059b" exitCode=0 Mar 20 00:19:55 crc kubenswrapper[5106]: I0320 00:19:55.576600 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerDied","Data":"1228d087c7bde3c99c7452feeb09cc740b7b75ef32a544f2a368d4a749bf059b"} Mar 20 00:19:55 crc kubenswrapper[5106]: I0320 00:19:55.576676 5106 scope.go:117] "RemoveContainer" containerID="6663498fde38077516653246979f4890e53ab8554f504d980573cec239ee48c3" Mar 20 00:19:56 crc kubenswrapper[5106]: I0320 00:19:56.585158 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerStarted","Data":"b9698c7bd4bd271067cba47912a53b2331be94e66a7a5d4468da4bc263f23f37"} Mar 20 00:20:00 crc kubenswrapper[5106]: I0320 00:20:00.140269 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566100-rlzj4"] Mar 20 00:20:00 crc kubenswrapper[5106]: I0320 00:20:00.141495 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a06fbd90-04f2-40c9-9465-67cb7a1fdda4" containerName="oc" Mar 20 00:20:00 crc kubenswrapper[5106]: I0320 00:20:00.141509 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06fbd90-04f2-40c9-9465-67cb7a1fdda4" containerName="oc" Mar 20 00:20:00 crc kubenswrapper[5106]: I0320 00:20:00.141679 5106 
memory_manager.go:356] "RemoveStaleState removing state" podUID="a06fbd90-04f2-40c9-9465-67cb7a1fdda4" containerName="oc" Mar 20 00:20:00 crc kubenswrapper[5106]: I0320 00:20:00.152479 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566100-rlzj4"] Mar 20 00:20:00 crc kubenswrapper[5106]: I0320 00:20:00.152634 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566100-rlzj4" Mar 20 00:20:00 crc kubenswrapper[5106]: I0320 00:20:00.156078 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Mar 20 00:20:00 crc kubenswrapper[5106]: I0320 00:20:00.156136 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Mar 20 00:20:00 crc kubenswrapper[5106]: I0320 00:20:00.156219 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5fjw8\"" Mar 20 00:20:00 crc kubenswrapper[5106]: I0320 00:20:00.261757 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhcq2\" (UniqueName: \"kubernetes.io/projected/da609d33-74de-4b65-8e69-9f577e0f3605-kube-api-access-jhcq2\") pod \"auto-csr-approver-29566100-rlzj4\" (UID: \"da609d33-74de-4b65-8e69-9f577e0f3605\") " pod="openshift-infra/auto-csr-approver-29566100-rlzj4" Mar 20 00:20:00 crc kubenswrapper[5106]: I0320 00:20:00.363268 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jhcq2\" (UniqueName: \"kubernetes.io/projected/da609d33-74de-4b65-8e69-9f577e0f3605-kube-api-access-jhcq2\") pod \"auto-csr-approver-29566100-rlzj4\" (UID: \"da609d33-74de-4b65-8e69-9f577e0f3605\") " pod="openshift-infra/auto-csr-approver-29566100-rlzj4" Mar 20 00:20:00 crc kubenswrapper[5106]: I0320 00:20:00.382736 5106 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhcq2\" (UniqueName: \"kubernetes.io/projected/da609d33-74de-4b65-8e69-9f577e0f3605-kube-api-access-jhcq2\") pod \"auto-csr-approver-29566100-rlzj4\" (UID: \"da609d33-74de-4b65-8e69-9f577e0f3605\") " pod="openshift-infra/auto-csr-approver-29566100-rlzj4" Mar 20 00:20:00 crc kubenswrapper[5106]: I0320 00:20:00.471468 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566100-rlzj4" Mar 20 00:20:00 crc kubenswrapper[5106]: I0320 00:20:00.687896 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566100-rlzj4"] Mar 20 00:20:01 crc kubenswrapper[5106]: I0320 00:20:01.613885 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566100-rlzj4" event={"ID":"da609d33-74de-4b65-8e69-9f577e0f3605","Type":"ContainerStarted","Data":"216da56e30c18ebca9179afad77c4aaa22f99b4895dd83042f39f3f7ba626e4f"} Mar 20 00:20:02 crc kubenswrapper[5106]: I0320 00:20:02.621080 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566100-rlzj4" event={"ID":"da609d33-74de-4b65-8e69-9f577e0f3605","Type":"ContainerStarted","Data":"4dccf8ae829d970d52e52f61edf719b67b9506e7b42bd8575132d164c2af7193"} Mar 20 00:20:02 crc kubenswrapper[5106]: I0320 00:20:02.640917 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566100-rlzj4" podStartSLOduration=1.220131142 podStartE2EDuration="2.64089577s" podCreationTimestamp="2026-03-20 00:20:00 +0000 UTC" firstStartedPulling="2026-03-20 00:20:00.694106008 +0000 UTC m=+655.127840062" lastFinishedPulling="2026-03-20 00:20:02.114870626 +0000 UTC m=+656.548604690" observedRunningTime="2026-03-20 00:20:02.637937184 +0000 UTC m=+657.071671238" watchObservedRunningTime="2026-03-20 00:20:02.64089577 +0000 UTC m=+657.074629824" Mar 20 
00:20:03 crc kubenswrapper[5106]: I0320 00:20:03.628482 5106 generic.go:358] "Generic (PLEG): container finished" podID="da609d33-74de-4b65-8e69-9f577e0f3605" containerID="4dccf8ae829d970d52e52f61edf719b67b9506e7b42bd8575132d164c2af7193" exitCode=0 Mar 20 00:20:03 crc kubenswrapper[5106]: I0320 00:20:03.628604 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566100-rlzj4" event={"ID":"da609d33-74de-4b65-8e69-9f577e0f3605","Type":"ContainerDied","Data":"4dccf8ae829d970d52e52f61edf719b67b9506e7b42bd8575132d164c2af7193"} Mar 20 00:20:04 crc kubenswrapper[5106]: I0320 00:20:04.888644 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566100-rlzj4" Mar 20 00:20:05 crc kubenswrapper[5106]: I0320 00:20:05.036563 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhcq2\" (UniqueName: \"kubernetes.io/projected/da609d33-74de-4b65-8e69-9f577e0f3605-kube-api-access-jhcq2\") pod \"da609d33-74de-4b65-8e69-9f577e0f3605\" (UID: \"da609d33-74de-4b65-8e69-9f577e0f3605\") " Mar 20 00:20:05 crc kubenswrapper[5106]: I0320 00:20:05.045820 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da609d33-74de-4b65-8e69-9f577e0f3605-kube-api-access-jhcq2" (OuterVolumeSpecName: "kube-api-access-jhcq2") pod "da609d33-74de-4b65-8e69-9f577e0f3605" (UID: "da609d33-74de-4b65-8e69-9f577e0f3605"). InnerVolumeSpecName "kube-api-access-jhcq2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:20:05 crc kubenswrapper[5106]: I0320 00:20:05.138101 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jhcq2\" (UniqueName: \"kubernetes.io/projected/da609d33-74de-4b65-8e69-9f577e0f3605-kube-api-access-jhcq2\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:05 crc kubenswrapper[5106]: I0320 00:20:05.647142 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566100-rlzj4" Mar 20 00:20:05 crc kubenswrapper[5106]: I0320 00:20:05.647175 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566100-rlzj4" event={"ID":"da609d33-74de-4b65-8e69-9f577e0f3605","Type":"ContainerDied","Data":"216da56e30c18ebca9179afad77c4aaa22f99b4895dd83042f39f3f7ba626e4f"} Mar 20 00:20:05 crc kubenswrapper[5106]: I0320 00:20:05.647226 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="216da56e30c18ebca9179afad77c4aaa22f99b4895dd83042f39f3f7ba626e4f" Mar 20 00:20:05 crc kubenswrapper[5106]: I0320 00:20:05.685514 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566094-5fcbx"] Mar 20 00:20:05 crc kubenswrapper[5106]: I0320 00:20:05.688589 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566094-5fcbx"] Mar 20 00:20:07 crc kubenswrapper[5106]: I0320 00:20:07.174225 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e82d111-0784-4d7b-baf1-02bd935d69e6" path="/var/lib/kubelet/pods/0e82d111-0784-4d7b-baf1-02bd935d69e6/volumes" Mar 20 00:20:07 crc kubenswrapper[5106]: I0320 00:20:07.656730 5106 scope.go:117] "RemoveContainer" containerID="64f2cb73e924e97def5a675deff985559984e33c75735c57fe4632a9b205af9d" Mar 20 00:20:09 crc kubenswrapper[5106]: I0320 00:20:09.845052 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc"]
Mar 20 00:20:09 crc kubenswrapper[5106]: I0320 00:20:09.845335 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" podUID="60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" containerName="kube-rbac-proxy" containerID="cri-o://84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65" gracePeriod=30
Mar 20 00:20:09 crc kubenswrapper[5106]: I0320 00:20:09.845453 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" podUID="60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" containerName="ovnkube-cluster-manager" containerID="cri-o://b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef" gracePeriod=30
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.068354 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qvw6r"]
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.068865 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="ovn-controller" containerID="cri-o://2b518e312797761600d953f4d2468ed5a689003063f65aac80dfc2d4e3197641" gracePeriod=30
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.068933 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2ddd9af58aa57b0d38a952b32ea235cc71190518291c29253037899f6abe3436" gracePeriod=30
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.069025 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="northd" containerID="cri-o://e60c16dea81b002da38f6e74a1183aae0d68d5ec2c0f76342944bc4a73fdae4c" gracePeriod=30
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.069082 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="kube-rbac-proxy-node" containerID="cri-o://05fffb60827beb7046e691cc7177ed8b7993dd8d1fd1d950c15861a7134a589f" gracePeriod=30
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.069071 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="sbdb" containerID="cri-o://6b599922621a6eb6574265e93c8a15394ed66d47eb7416a4c360858244f15c11" gracePeriod=30
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.069172 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="ovn-acl-logging" containerID="cri-o://6c59b9743060c37ccc6998ad273851bf70a36a19866d8a37f385a982d31a58df" gracePeriod=30
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.068980 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="nbdb" containerID="cri-o://fece05bf4471d253fc963704a8c67e46be2138f175022e70538c8b6b2a055eab" gracePeriod=30
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.105116 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="ovnkube-controller" containerID="cri-o://0c085f3e5a57eee1a558eb14c8d707dd271557ce447c84bbcd4949881723922b" gracePeriod=30
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.535219 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.566669 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq"]
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.567305 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="da609d33-74de-4b65-8e69-9f577e0f3605" containerName="oc"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.567326 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="da609d33-74de-4b65-8e69-9f577e0f3605" containerName="oc"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.567347 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" containerName="ovnkube-cluster-manager"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.567356 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" containerName="ovnkube-cluster-manager"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.567379 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" containerName="kube-rbac-proxy"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.567386 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" containerName="kube-rbac-proxy"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.567496 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="da609d33-74de-4b65-8e69-9f577e0f3605" containerName="oc"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.567516 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" containerName="ovnkube-cluster-manager"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.567528 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" containerName="kube-rbac-proxy"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.571006 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq"
Mar 20 00:20:10 crc kubenswrapper[5106]: E0320 00:20:10.595850 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b599922621a6eb6574265e93c8a15394ed66d47eb7416a4c360858244f15c11 is running failed: container process not found" containerID="6b599922621a6eb6574265e93c8a15394ed66d47eb7416a4c360858244f15c11" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 20 00:20:10 crc kubenswrapper[5106]: E0320 00:20:10.595916 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fece05bf4471d253fc963704a8c67e46be2138f175022e70538c8b6b2a055eab is running failed: container process not found" containerID="fece05bf4471d253fc963704a8c67e46be2138f175022e70538c8b6b2a055eab" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 20 00:20:10 crc kubenswrapper[5106]: E0320 00:20:10.596340 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b599922621a6eb6574265e93c8a15394ed66d47eb7416a4c360858244f15c11 is running failed: container process not found" containerID="6b599922621a6eb6574265e93c8a15394ed66d47eb7416a4c360858244f15c11" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 20 00:20:10 crc kubenswrapper[5106]: E0320 00:20:10.596459 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fece05bf4471d253fc963704a8c67e46be2138f175022e70538c8b6b2a055eab is running failed: container process not found" containerID="fece05bf4471d253fc963704a8c67e46be2138f175022e70538c8b6b2a055eab" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 20 00:20:10 crc kubenswrapper[5106]: E0320 00:20:10.596871 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fece05bf4471d253fc963704a8c67e46be2138f175022e70538c8b6b2a055eab is running failed: container process not found" containerID="fece05bf4471d253fc963704a8c67e46be2138f175022e70538c8b6b2a055eab" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 20 00:20:10 crc kubenswrapper[5106]: E0320 00:20:10.596919 5106 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fece05bf4471d253fc963704a8c67e46be2138f175022e70538c8b6b2a055eab is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="nbdb" probeResult="unknown"
Mar 20 00:20:10 crc kubenswrapper[5106]: E0320 00:20:10.596929 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b599922621a6eb6574265e93c8a15394ed66d47eb7416a4c360858244f15c11 is running failed: container process not found" containerID="6b599922621a6eb6574265e93c8a15394ed66d47eb7416a4c360858244f15c11" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 20 00:20:10 crc kubenswrapper[5106]: E0320 00:20:10.596970 5106 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6b599922621a6eb6574265e93c8a15394ed66d47eb7416a4c360858244f15c11 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="sbdb" probeResult="unknown"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.626224 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-ovnkube-config\") pod \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\" (UID: \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") "
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.626320 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lh8x\" (UniqueName: \"kubernetes.io/projected/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-kube-api-access-8lh8x\") pod \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\" (UID: \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") "
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.626356 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-env-overrides\") pod \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\" (UID: \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") "
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.626530 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-ovn-control-plane-metrics-cert\") pod \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\" (UID: \"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe\") "
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.626815 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k29dt\" (UniqueName: \"kubernetes.io/projected/eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe-kube-api-access-k29dt\") pod \"ovnkube-control-plane-97c9b6c48-f9vgq\" (UID: \"eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.626863 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-f9vgq\" (UID: \"eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.626940 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-f9vgq\" (UID: \"eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.627021 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-f9vgq\" (UID: \"eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.627243 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" (UID: "60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.627285 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" (UID: "60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.631966 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" (UID: "60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.632090 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-kube-api-access-8lh8x" (OuterVolumeSpecName: "kube-api-access-8lh8x") pod "60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" (UID: "60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe"). InnerVolumeSpecName "kube-api-access-8lh8x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.692099 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvw6r_99795294-4844-44e8-b55b-998323bd4f6e/ovn-acl-logging/0.log"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693089 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvw6r_99795294-4844-44e8-b55b-998323bd4f6e/ovn-controller/0.log"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693498 5106 generic.go:358] "Generic (PLEG): container finished" podID="99795294-4844-44e8-b55b-998323bd4f6e" containerID="0c085f3e5a57eee1a558eb14c8d707dd271557ce447c84bbcd4949881723922b" exitCode=0
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693531 5106 generic.go:358] "Generic (PLEG): container finished" podID="99795294-4844-44e8-b55b-998323bd4f6e" containerID="6b599922621a6eb6574265e93c8a15394ed66d47eb7416a4c360858244f15c11" exitCode=0
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693541 5106 generic.go:358] "Generic (PLEG): container finished" podID="99795294-4844-44e8-b55b-998323bd4f6e" containerID="fece05bf4471d253fc963704a8c67e46be2138f175022e70538c8b6b2a055eab" exitCode=0
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693548 5106 generic.go:358] "Generic (PLEG): container finished" podID="99795294-4844-44e8-b55b-998323bd4f6e" containerID="e60c16dea81b002da38f6e74a1183aae0d68d5ec2c0f76342944bc4a73fdae4c" exitCode=0
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693554 5106 generic.go:358] "Generic (PLEG): container finished" podID="99795294-4844-44e8-b55b-998323bd4f6e" containerID="2ddd9af58aa57b0d38a952b32ea235cc71190518291c29253037899f6abe3436" exitCode=0
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693544 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerDied","Data":"0c085f3e5a57eee1a558eb14c8d707dd271557ce447c84bbcd4949881723922b"}
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693614 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerDied","Data":"6b599922621a6eb6574265e93c8a15394ed66d47eb7416a4c360858244f15c11"}
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693640 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerDied","Data":"fece05bf4471d253fc963704a8c67e46be2138f175022e70538c8b6b2a055eab"}
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693561 5106 generic.go:358] "Generic (PLEG): container finished" podID="99795294-4844-44e8-b55b-998323bd4f6e" containerID="05fffb60827beb7046e691cc7177ed8b7993dd8d1fd1d950c15861a7134a589f" exitCode=0
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693680 5106 generic.go:358] "Generic (PLEG): container finished" podID="99795294-4844-44e8-b55b-998323bd4f6e" containerID="6c59b9743060c37ccc6998ad273851bf70a36a19866d8a37f385a982d31a58df" exitCode=143
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693699 5106 generic.go:358] "Generic (PLEG): container finished" podID="99795294-4844-44e8-b55b-998323bd4f6e" containerID="2b518e312797761600d953f4d2468ed5a689003063f65aac80dfc2d4e3197641" exitCode=143
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693660 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerDied","Data":"e60c16dea81b002da38f6e74a1183aae0d68d5ec2c0f76342944bc4a73fdae4c"}
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693769 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerDied","Data":"2ddd9af58aa57b0d38a952b32ea235cc71190518291c29253037899f6abe3436"}
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693783 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerDied","Data":"05fffb60827beb7046e691cc7177ed8b7993dd8d1fd1d950c15861a7134a589f"}
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693794 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerDied","Data":"6c59b9743060c37ccc6998ad273851bf70a36a19866d8a37f385a982d31a58df"}
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.693804 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerDied","Data":"2b518e312797761600d953f4d2468ed5a689003063f65aac80dfc2d4e3197641"}
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.696415 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xtksh_9da3e0a0-f6ab-4f57-925e-c59772b3d6d9/kube-multus/0.log"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.696470 5106 generic.go:358] "Generic (PLEG): container finished" podID="9da3e0a0-f6ab-4f57-925e-c59772b3d6d9" containerID="4884a24b5e56e4fa296eff21cdf419b0193f65bffeaf8fcd6a1ad11c289ae430" exitCode=2
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.696505 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xtksh" event={"ID":"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9","Type":"ContainerDied","Data":"4884a24b5e56e4fa296eff21cdf419b0193f65bffeaf8fcd6a1ad11c289ae430"}
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.697428 5106 scope.go:117] "RemoveContainer" containerID="4884a24b5e56e4fa296eff21cdf419b0193f65bffeaf8fcd6a1ad11c289ae430"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.703991 5106 generic.go:358] "Generic (PLEG): container finished" podID="60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" containerID="b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef" exitCode=0
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.704020 5106 generic.go:358] "Generic (PLEG): container finished" podID="60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" containerID="84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65" exitCode=0
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.704111 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.704134 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" event={"ID":"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe","Type":"ContainerDied","Data":"b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef"}
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.704180 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" event={"ID":"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe","Type":"ContainerDied","Data":"84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65"}
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.704198 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc" event={"ID":"60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe","Type":"ContainerDied","Data":"4ee47e85fe34a9bc0d0f079f2b85da175c22f3c46bd033232e940b716040e386"}
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.704221 5106 scope.go:117] "RemoveContainer" containerID="b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.720340 5106 scope.go:117] "RemoveContainer" containerID="84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.728433 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-f9vgq\" (UID: \"eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.728496 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-f9vgq\" (UID: \"eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.729848 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k29dt\" (UniqueName: \"kubernetes.io/projected/eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe-kube-api-access-k29dt\") pod \"ovnkube-control-plane-97c9b6c48-f9vgq\" (UID: \"eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.729925 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-f9vgq\" (UID: \"eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.730042 5106 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.730141 5106 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-ovnkube-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.730177 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8lh8x\" (UniqueName: \"kubernetes.io/projected/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-kube-api-access-8lh8x\") on node \"crc\" DevicePath \"\""
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.730188 5106 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe-env-overrides\") on node \"crc\" DevicePath \"\""
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.731355 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-f9vgq\" (UID: \"eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.731395 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-f9vgq\" (UID: \"eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.734913 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-f9vgq\" (UID: \"eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.750106 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k29dt\" (UniqueName: \"kubernetes.io/projected/eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe-kube-api-access-k29dt\") pod \"ovnkube-control-plane-97c9b6c48-f9vgq\" (UID: \"eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.754542 5106 scope.go:117] "RemoveContainer" containerID="b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef"
Mar 20 00:20:10 crc kubenswrapper[5106]: E0320 00:20:10.755165 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef\": container with ID starting with b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef not found: ID does not exist" containerID="b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.755201 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef"} err="failed to get container status \"b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef\": rpc error: code = NotFound desc = could not find container \"b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef\": container with ID starting with b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef not found: ID does not exist"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.755226 5106 scope.go:117] "RemoveContainer" containerID="84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65"
Mar 20 00:20:10 crc kubenswrapper[5106]: E0320 00:20:10.755791 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65\": container with ID starting with 84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65 not found: ID does not exist" containerID="84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.755844 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65"} err="failed to get container status \"84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65\": rpc error: code = NotFound desc = could not find container \"84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65\": container with ID starting with 84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65 not found: ID does not exist"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.755865 5106 scope.go:117] "RemoveContainer" containerID="b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.756474 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef"} err="failed to get container status \"b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef\": rpc error: code = NotFound desc = could not find container \"b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef\": container with ID starting with b9002ef82affff95cbeb2d0afde691cb803bc520f8e9f46586ea8f8c255ac6ef not found: ID does not exist"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.756550 5106 scope.go:117] "RemoveContainer" containerID="84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.756912 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65"} err="failed to get container status \"84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65\": rpc error: code = NotFound desc = could not find container \"84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65\": container with ID starting with 84ed898c3b8b2d356641ee6f838d512a0fa0cf3738b8fefe34621d4eba5d6f65 not found: ID does not exist"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.758055 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc"]
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.768683 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-trcsc"]
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.852705 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvw6r_99795294-4844-44e8-b55b-998323bd4f6e/ovn-acl-logging/0.log"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.853178 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvw6r_99795294-4844-44e8-b55b-998323bd4f6e/ovn-controller/0.log"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.853481 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.891456 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.911260 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bnsjq"]
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912593 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="kube-rbac-proxy-ovn-metrics"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912622 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="kube-rbac-proxy-ovn-metrics"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912638 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="sbdb"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912646 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="sbdb"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912669 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="northd"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912679 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="northd"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912690 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="ovn-controller"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912698 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="ovn-controller"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912707 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="kube-rbac-proxy-node"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912715 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="kube-rbac-proxy-node"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912724 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="nbdb"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912731 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="nbdb"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912741 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="kubecfg-setup"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912749 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="kubecfg-setup"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912769 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="ovn-acl-logging"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912776 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="ovn-acl-logging"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912785 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="ovnkube-controller"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912792 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="ovnkube-controller"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912920 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="kube-rbac-proxy-ovn-metrics"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912937 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="ovn-acl-logging"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912947 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="sbdb"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912958 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="kube-rbac-proxy-node"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912969 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="nbdb"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912977 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="ovn-controller"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912986 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="northd"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.912995 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="99795294-4844-44e8-b55b-998323bd4f6e" containerName="ovnkube-controller"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.922144 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq"
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932108 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") "
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932188 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-systemd-units\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") "
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932220 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-env-overrides\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") "
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932262 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-var-lib-openvswitch\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") "
Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932211 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e").
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932302 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-node-log\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932238 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932317 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-etc-openvswitch\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932343 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/99795294-4844-44e8-b55b-998323bd4f6e-ovn-node-metrics-cert\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932367 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-log-socket\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc 
kubenswrapper[5106]: I0320 00:20:10.932366 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932400 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-openvswitch\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932399 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932417 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-kubelet\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932425 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-node-log" (OuterVolumeSpecName: "node-log") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932448 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-ovn\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932501 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-cni-bin\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932522 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rszfl\" (UniqueName: \"kubernetes.io/projected/99795294-4844-44e8-b55b-998323bd4f6e-kube-api-access-rszfl\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932550 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-run-ovn-kubernetes\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932592 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-systemd\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932613 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-ovnkube-config\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932629 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-run-netns\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932647 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-ovnkube-script-lib\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932687 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-cni-netd\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932704 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-slash\") pod \"99795294-4844-44e8-b55b-998323bd4f6e\" (UID: \"99795294-4844-44e8-b55b-998323bd4f6e\") " Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932876 5106 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932888 5106 
reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932897 5106 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932906 5106 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-node-log\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932914 5106 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932448 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-log-socket" (OuterVolumeSpecName: "log-socket") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932988 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.933012 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932943 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-slash" (OuterVolumeSpecName: "host-slash") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932960 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932974 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.932988 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.933077 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.933475 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.933522 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.933488 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.933796 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.937795 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99795294-4844-44e8-b55b-998323bd4f6e-kube-api-access-rszfl" (OuterVolumeSpecName: "kube-api-access-rszfl") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "kube-api-access-rszfl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.943739 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99795294-4844-44e8-b55b-998323bd4f6e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:20:10 crc kubenswrapper[5106]: I0320 00:20:10.946037 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "99795294-4844-44e8-b55b-998323bd4f6e" (UID: "99795294-4844-44e8-b55b-998323bd4f6e"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034049 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-run-netns\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034612 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-run-ovn-kubernetes\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034638 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-systemd-units\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034655 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-node-log\") pod 
\"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034670 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-slash\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034687 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-run-ovn\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034707 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-cni-bin\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034728 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-cni-netd\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034748 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2118d519-1020-40a2-b32b-80f66d83c815-ovnkube-config\") pod 
\"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034769 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2118d519-1020-40a2-b32b-80f66d83c815-ovn-node-metrics-cert\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034787 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-run-systemd\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034815 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2118d519-1020-40a2-b32b-80f66d83c815-ovnkube-script-lib\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034834 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034875 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-log-socket\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034898 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-kubelet\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034917 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-var-lib-openvswitch\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034942 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-etc-openvswitch\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034962 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q746l\" (UniqueName: \"kubernetes.io/projected/2118d519-1020-40a2-b32b-80f66d83c815-kube-api-access-q746l\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.034999 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-run-openvswitch\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.035026 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2118d519-1020-40a2-b32b-80f66d83c815-env-overrides\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.035066 5106 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/99795294-4844-44e8-b55b-998323bd4f6e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.035077 5106 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-log-socket\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.035087 5106 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.035096 5106 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.035105 5106 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:11 crc 
kubenswrapper[5106]: I0320 00:20:11.035116 5106 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.035125 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rszfl\" (UniqueName: \"kubernetes.io/projected/99795294-4844-44e8-b55b-998323bd4f6e-kube-api-access-rszfl\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.035135 5106 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.035145 5106 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.035154 5106 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.035164 5106 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.035173 5106 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.035182 5106 reconciler_common.go:299] "Volume detached 
for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.035191 5106 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/99795294-4844-44e8-b55b-998323bd4f6e-host-slash\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.035199 5106 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/99795294-4844-44e8-b55b-998323bd4f6e-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136128 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-run-netns\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136188 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-run-ovn-kubernetes\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136214 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-systemd-units\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136238 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-node-log\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136258 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-slash\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136277 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-run-ovn\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136301 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-cni-bin\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136305 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-run-netns\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136326 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-cni-netd\") pod 
\"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136381 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-cni-netd\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136423 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2118d519-1020-40a2-b32b-80f66d83c815-ovnkube-config\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136437 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-run-ovn-kubernetes\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136486 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2118d519-1020-40a2-b32b-80f66d83c815-ovn-node-metrics-cert\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136544 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-run-systemd\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136630 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2118d519-1020-40a2-b32b-80f66d83c815-ovnkube-script-lib\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136667 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136859 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-log-socket\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136937 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136953 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-slash\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136968 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-run-systemd\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136984 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-node-log\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136491 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-systemd-units\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.136986 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-run-ovn\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.137007 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-cni-bin\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.137008 5106 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-kubelet\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.137018 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-log-socket\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.137038 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-host-kubelet\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.137753 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2118d519-1020-40a2-b32b-80f66d83c815-ovnkube-config\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.137780 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2118d519-1020-40a2-b32b-80f66d83c815-ovnkube-script-lib\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.138100 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-var-lib-openvswitch\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.138174 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-var-lib-openvswitch\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.138181 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-etc-openvswitch\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.138204 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-etc-openvswitch\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.138229 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q746l\" (UniqueName: \"kubernetes.io/projected/2118d519-1020-40a2-b32b-80f66d83c815-kube-api-access-q746l\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.138317 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-run-openvswitch\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.138367 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2118d519-1020-40a2-b32b-80f66d83c815-env-overrides\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.138747 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2118d519-1020-40a2-b32b-80f66d83c815-env-overrides\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.138762 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2118d519-1020-40a2-b32b-80f66d83c815-run-openvswitch\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.142432 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2118d519-1020-40a2-b32b-80f66d83c815-ovn-node-metrics-cert\") pod \"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.163771 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q746l\" (UniqueName: \"kubernetes.io/projected/2118d519-1020-40a2-b32b-80f66d83c815-kube-api-access-q746l\") pod 
\"ovnkube-node-bnsjq\" (UID: \"2118d519-1020-40a2-b32b-80f66d83c815\") " pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.168071 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe" path="/var/lib/kubelet/pods/60b4e0cb-0c7a-4a61-8c4c-2075e7bf2ebe/volumes" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.242619 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:11 crc kubenswrapper[5106]: W0320 00:20:11.265236 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2118d519_1020_40a2_b32b_80f66d83c815.slice/crio-fb2531dab1533843133b7bb9cd0e4ba1e47955b823ea46ac915a418df0a28439 WatchSource:0}: Error finding container fb2531dab1533843133b7bb9cd0e4ba1e47955b823ea46ac915a418df0a28439: Status 404 returned error can't find the container with id fb2531dab1533843133b7bb9cd0e4ba1e47955b823ea46ac915a418df0a28439 Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.712042 5106 generic.go:358] "Generic (PLEG): container finished" podID="2118d519-1020-40a2-b32b-80f66d83c815" containerID="44c092c40435be8f797756c18806df5ca0c4e5cc1e1950fcb32e1ffe3d4d613a" exitCode=0 Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.712226 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" event={"ID":"2118d519-1020-40a2-b32b-80f66d83c815","Type":"ContainerDied","Data":"44c092c40435be8f797756c18806df5ca0c4e5cc1e1950fcb32e1ffe3d4d613a"} Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.712702 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" event={"ID":"2118d519-1020-40a2-b32b-80f66d83c815","Type":"ContainerStarted","Data":"fb2531dab1533843133b7bb9cd0e4ba1e47955b823ea46ac915a418df0a28439"} Mar 
20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.719952 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvw6r_99795294-4844-44e8-b55b-998323bd4f6e/ovn-acl-logging/0.log" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.720411 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qvw6r_99795294-4844-44e8-b55b-998323bd4f6e/ovn-controller/0.log" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.720743 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" event={"ID":"99795294-4844-44e8-b55b-998323bd4f6e","Type":"ContainerDied","Data":"d8549a9c58ad3977289558b5d259040ba8382a65f326eae10975f7b4a2222951"} Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.720790 5106 scope.go:117] "RemoveContainer" containerID="0c085f3e5a57eee1a558eb14c8d707dd271557ce447c84bbcd4949881723922b" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.720997 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qvw6r" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.725801 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq" event={"ID":"eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe","Type":"ContainerStarted","Data":"a3473eaad98a475b5714c532348e33c775c8d97548330f3ed084026f3b64cef7"} Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.725897 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq" event={"ID":"eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe","Type":"ContainerStarted","Data":"ee0a36c30946c556c05bcc9e4014a16cb245050381a9ece4d6f3d85a28b69578"} Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.725941 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq" event={"ID":"eb2af2d0-7e83-4ec5-87ba-15c1b6fe82fe","Type":"ContainerStarted","Data":"7d1ea4cc77cf536cb640f2c833e9f5e7d10f3cdcdb1381fd495dc06c14414fbd"} Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.728442 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xtksh_9da3e0a0-f6ab-4f57-925e-c59772b3d6d9/kube-multus/0.log" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.728682 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xtksh" event={"ID":"9da3e0a0-f6ab-4f57-925e-c59772b3d6d9","Type":"ContainerStarted","Data":"43bcd00b131b9d3737bc1c67be7b5a919133d6778f80737caa839e1f829c587a"} Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.741305 5106 scope.go:117] "RemoveContainer" containerID="6b599922621a6eb6574265e93c8a15394ed66d47eb7416a4c360858244f15c11" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.774400 5106 scope.go:117] "RemoveContainer" containerID="fece05bf4471d253fc963704a8c67e46be2138f175022e70538c8b6b2a055eab" Mar 20 00:20:11 crc 
kubenswrapper[5106]: I0320 00:20:11.786354 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-f9vgq" podStartSLOduration=2.786333768 podStartE2EDuration="2.786333768s" podCreationTimestamp="2026-03-20 00:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:20:11.764224788 +0000 UTC m=+666.197958872" watchObservedRunningTime="2026-03-20 00:20:11.786333768 +0000 UTC m=+666.220067822" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.803660 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qvw6r"] Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.809193 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qvw6r"] Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.810843 5106 scope.go:117] "RemoveContainer" containerID="e60c16dea81b002da38f6e74a1183aae0d68d5ec2c0f76342944bc4a73fdae4c" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.841554 5106 scope.go:117] "RemoveContainer" containerID="2ddd9af58aa57b0d38a952b32ea235cc71190518291c29253037899f6abe3436" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.857463 5106 scope.go:117] "RemoveContainer" containerID="05fffb60827beb7046e691cc7177ed8b7993dd8d1fd1d950c15861a7134a589f" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.874811 5106 scope.go:117] "RemoveContainer" containerID="6c59b9743060c37ccc6998ad273851bf70a36a19866d8a37f385a982d31a58df" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.900182 5106 scope.go:117] "RemoveContainer" containerID="2b518e312797761600d953f4d2468ed5a689003063f65aac80dfc2d4e3197641" Mar 20 00:20:11 crc kubenswrapper[5106]: I0320 00:20:11.915124 5106 scope.go:117] "RemoveContainer" containerID="88cef2ceffaeee6e17dfca6bb04e79772f219727137cc936fc0c5bb31b5dd5e1" Mar 20 00:20:12 
crc kubenswrapper[5106]: I0320 00:20:12.745015 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" event={"ID":"2118d519-1020-40a2-b32b-80f66d83c815","Type":"ContainerStarted","Data":"106f775692af9b275c8eae5444b3c5efda833e00242f1aea70fe4b4579cdc547"} Mar 20 00:20:12 crc kubenswrapper[5106]: I0320 00:20:12.745377 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" event={"ID":"2118d519-1020-40a2-b32b-80f66d83c815","Type":"ContainerStarted","Data":"0f1d53c9ea02a4fe69c9a30efaf2d626084ce499e9ebf366e78284e06cb840ac"} Mar 20 00:20:12 crc kubenswrapper[5106]: I0320 00:20:12.745390 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" event={"ID":"2118d519-1020-40a2-b32b-80f66d83c815","Type":"ContainerStarted","Data":"77ca0fb1e3984a9ecd3a06a0dd4398e6f95a43663cc60d0cf4f8c0b1c21efe83"} Mar 20 00:20:12 crc kubenswrapper[5106]: I0320 00:20:12.745398 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" event={"ID":"2118d519-1020-40a2-b32b-80f66d83c815","Type":"ContainerStarted","Data":"92435f7943450e1fd9bb6cb4e62ce81593579e4c32efab58c2978731ecd9397d"} Mar 20 00:20:12 crc kubenswrapper[5106]: I0320 00:20:12.745410 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" event={"ID":"2118d519-1020-40a2-b32b-80f66d83c815","Type":"ContainerStarted","Data":"b365745a832cf315762784321fff3682d4e9a2962cff29081192a9061968d86a"} Mar 20 00:20:12 crc kubenswrapper[5106]: I0320 00:20:12.745423 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" event={"ID":"2118d519-1020-40a2-b32b-80f66d83c815","Type":"ContainerStarted","Data":"ab180ca59157856623632b9b87f333d2c560cd04561960312dc6ab9d2e77a6e9"} Mar 20 00:20:13 crc kubenswrapper[5106]: I0320 00:20:13.175096 5106 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="99795294-4844-44e8-b55b-998323bd4f6e" path="/var/lib/kubelet/pods/99795294-4844-44e8-b55b-998323bd4f6e/volumes" Mar 20 00:20:15 crc kubenswrapper[5106]: I0320 00:20:15.773633 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" event={"ID":"2118d519-1020-40a2-b32b-80f66d83c815","Type":"ContainerStarted","Data":"4ce8c629993d94d161856c072dfdedf8ca09c2437f3e11501c67e85edff3ffe4"} Mar 20 00:20:17 crc kubenswrapper[5106]: I0320 00:20:17.790106 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" event={"ID":"2118d519-1020-40a2-b32b-80f66d83c815","Type":"ContainerStarted","Data":"f7cb59b5341d225a6905364ca9ed38f69694265afed27890cc7a4528be7f0048"} Mar 20 00:20:17 crc kubenswrapper[5106]: I0320 00:20:17.790531 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:17 crc kubenswrapper[5106]: I0320 00:20:17.790550 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:17 crc kubenswrapper[5106]: I0320 00:20:17.826989 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:17 crc kubenswrapper[5106]: I0320 00:20:17.856618 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" podStartSLOduration=7.856592374 podStartE2EDuration="7.856592374s" podCreationTimestamp="2026-03-20 00:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:20:17.827344586 +0000 UTC m=+672.261078630" watchObservedRunningTime="2026-03-20 00:20:17.856592374 +0000 UTC m=+672.290326448" Mar 20 00:20:18 crc kubenswrapper[5106]: I0320 
00:20:18.797682 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:18 crc kubenswrapper[5106]: I0320 00:20:18.829229 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:20:50 crc kubenswrapper[5106]: I0320 00:20:50.838284 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bnsjq" Mar 20 00:21:15 crc kubenswrapper[5106]: I0320 00:21:15.650122 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjjqw"] Mar 20 00:21:15 crc kubenswrapper[5106]: I0320 00:21:15.650963 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bjjqw" podUID="e7305fef-edd6-41c7-8db6-33177da2a53c" containerName="registry-server" containerID="cri-o://9f2514bcdc837aaac3a1cda449cda686c3ac2f006de2c34d9d8f0ef0e91c8fce" gracePeriod=30 Mar 20 00:21:15 crc kubenswrapper[5106]: I0320 00:21:15.990918 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjjqw" Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.171864 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7305fef-edd6-41c7-8db6-33177da2a53c-catalog-content\") pod \"e7305fef-edd6-41c7-8db6-33177da2a53c\" (UID: \"e7305fef-edd6-41c7-8db6-33177da2a53c\") " Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.171924 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hwjx\" (UniqueName: \"kubernetes.io/projected/e7305fef-edd6-41c7-8db6-33177da2a53c-kube-api-access-2hwjx\") pod \"e7305fef-edd6-41c7-8db6-33177da2a53c\" (UID: \"e7305fef-edd6-41c7-8db6-33177da2a53c\") " Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.171965 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7305fef-edd6-41c7-8db6-33177da2a53c-utilities\") pod \"e7305fef-edd6-41c7-8db6-33177da2a53c\" (UID: \"e7305fef-edd6-41c7-8db6-33177da2a53c\") " Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.173285 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7305fef-edd6-41c7-8db6-33177da2a53c-utilities" (OuterVolumeSpecName: "utilities") pod "e7305fef-edd6-41c7-8db6-33177da2a53c" (UID: "e7305fef-edd6-41c7-8db6-33177da2a53c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.178318 5106 generic.go:358] "Generic (PLEG): container finished" podID="e7305fef-edd6-41c7-8db6-33177da2a53c" containerID="9f2514bcdc837aaac3a1cda449cda686c3ac2f006de2c34d9d8f0ef0e91c8fce" exitCode=0 Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.178426 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjjqw" event={"ID":"e7305fef-edd6-41c7-8db6-33177da2a53c","Type":"ContainerDied","Data":"9f2514bcdc837aaac3a1cda449cda686c3ac2f006de2c34d9d8f0ef0e91c8fce"} Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.178461 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjjqw" event={"ID":"e7305fef-edd6-41c7-8db6-33177da2a53c","Type":"ContainerDied","Data":"a095fb49543fea9a70188b968a0c9ecc48363754e53bf6f408d64f402f0f2981"} Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.178480 5106 scope.go:117] "RemoveContainer" containerID="9f2514bcdc837aaac3a1cda449cda686c3ac2f006de2c34d9d8f0ef0e91c8fce" Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.178657 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjjqw" Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.185772 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7305fef-edd6-41c7-8db6-33177da2a53c-kube-api-access-2hwjx" (OuterVolumeSpecName: "kube-api-access-2hwjx") pod "e7305fef-edd6-41c7-8db6-33177da2a53c" (UID: "e7305fef-edd6-41c7-8db6-33177da2a53c"). InnerVolumeSpecName "kube-api-access-2hwjx". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.200644 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7305fef-edd6-41c7-8db6-33177da2a53c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7305fef-edd6-41c7-8db6-33177da2a53c" (UID: "e7305fef-edd6-41c7-8db6-33177da2a53c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.202660 5106 scope.go:117] "RemoveContainer" containerID="580a016d05904f51e975c4d8b881ed9de9258cc78df49b28380bce7e8ea63a1e"
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.217310 5106 scope.go:117] "RemoveContainer" containerID="499814fe1e5d65c935070795567278a14125e3062681aa9b264459014c783ddf"
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.231286 5106 scope.go:117] "RemoveContainer" containerID="9f2514bcdc837aaac3a1cda449cda686c3ac2f006de2c34d9d8f0ef0e91c8fce"
Mar 20 00:21:16 crc kubenswrapper[5106]: E0320 00:21:16.231667 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f2514bcdc837aaac3a1cda449cda686c3ac2f006de2c34d9d8f0ef0e91c8fce\": container with ID starting with 9f2514bcdc837aaac3a1cda449cda686c3ac2f006de2c34d9d8f0ef0e91c8fce not found: ID does not exist" containerID="9f2514bcdc837aaac3a1cda449cda686c3ac2f006de2c34d9d8f0ef0e91c8fce"
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.231699 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2514bcdc837aaac3a1cda449cda686c3ac2f006de2c34d9d8f0ef0e91c8fce"} err="failed to get container status \"9f2514bcdc837aaac3a1cda449cda686c3ac2f006de2c34d9d8f0ef0e91c8fce\": rpc error: code = NotFound desc = could not find container \"9f2514bcdc837aaac3a1cda449cda686c3ac2f006de2c34d9d8f0ef0e91c8fce\": container with ID starting with 9f2514bcdc837aaac3a1cda449cda686c3ac2f006de2c34d9d8f0ef0e91c8fce not found: ID does not exist"
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.231724 5106 scope.go:117] "RemoveContainer" containerID="580a016d05904f51e975c4d8b881ed9de9258cc78df49b28380bce7e8ea63a1e"
Mar 20 00:21:16 crc kubenswrapper[5106]: E0320 00:21:16.232020 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"580a016d05904f51e975c4d8b881ed9de9258cc78df49b28380bce7e8ea63a1e\": container with ID starting with 580a016d05904f51e975c4d8b881ed9de9258cc78df49b28380bce7e8ea63a1e not found: ID does not exist" containerID="580a016d05904f51e975c4d8b881ed9de9258cc78df49b28380bce7e8ea63a1e"
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.232040 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"580a016d05904f51e975c4d8b881ed9de9258cc78df49b28380bce7e8ea63a1e"} err="failed to get container status \"580a016d05904f51e975c4d8b881ed9de9258cc78df49b28380bce7e8ea63a1e\": rpc error: code = NotFound desc = could not find container \"580a016d05904f51e975c4d8b881ed9de9258cc78df49b28380bce7e8ea63a1e\": container with ID starting with 580a016d05904f51e975c4d8b881ed9de9258cc78df49b28380bce7e8ea63a1e not found: ID does not exist"
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.232055 5106 scope.go:117] "RemoveContainer" containerID="499814fe1e5d65c935070795567278a14125e3062681aa9b264459014c783ddf"
Mar 20 00:21:16 crc kubenswrapper[5106]: E0320 00:21:16.232308 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"499814fe1e5d65c935070795567278a14125e3062681aa9b264459014c783ddf\": container with ID starting with 499814fe1e5d65c935070795567278a14125e3062681aa9b264459014c783ddf not found: ID does not exist" containerID="499814fe1e5d65c935070795567278a14125e3062681aa9b264459014c783ddf"
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.232364 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"499814fe1e5d65c935070795567278a14125e3062681aa9b264459014c783ddf"} err="failed to get container status \"499814fe1e5d65c935070795567278a14125e3062681aa9b264459014c783ddf\": rpc error: code = NotFound desc = could not find container \"499814fe1e5d65c935070795567278a14125e3062681aa9b264459014c783ddf\": container with ID starting with 499814fe1e5d65c935070795567278a14125e3062681aa9b264459014c783ddf not found: ID does not exist"
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.274731 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7305fef-edd6-41c7-8db6-33177da2a53c-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.274768 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2hwjx\" (UniqueName: \"kubernetes.io/projected/e7305fef-edd6-41c7-8db6-33177da2a53c-kube-api-access-2hwjx\") on node \"crc\" DevicePath \"\""
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.274785 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7305fef-edd6-41c7-8db6-33177da2a53c-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.513065 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjjqw"]
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.516941 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjjqw"]
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.841889 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rspp8"]
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.844109 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7305fef-edd6-41c7-8db6-33177da2a53c" containerName="extract-content"
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.844140 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7305fef-edd6-41c7-8db6-33177da2a53c" containerName="extract-content"
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.844166 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7305fef-edd6-41c7-8db6-33177da2a53c" containerName="registry-server"
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.844174 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7305fef-edd6-41c7-8db6-33177da2a53c" containerName="registry-server"
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.844267 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7305fef-edd6-41c7-8db6-33177da2a53c" containerName="extract-utilities"
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.844280 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7305fef-edd6-41c7-8db6-33177da2a53c" containerName="extract-utilities"
Mar 20 00:21:16 crc kubenswrapper[5106]: I0320 00:21:16.844489 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="e7305fef-edd6-41c7-8db6-33177da2a53c" containerName="registry-server"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.140203 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rspp8"]
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.140404 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.169332 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7305fef-edd6-41c7-8db6-33177da2a53c" path="/var/lib/kubelet/pods/e7305fef-edd6-41c7-8db6-33177da2a53c/volumes"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.285778 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.285876 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.285927 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.285973 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-registry-certificates\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.286094 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-742bn\" (UniqueName: \"kubernetes.io/projected/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-kube-api-access-742bn\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.286253 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-registry-tls\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.286303 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-trusted-ca\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.286331 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.310898 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.387971 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-742bn\" (UniqueName: \"kubernetes.io/projected/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-kube-api-access-742bn\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.388022 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-registry-tls\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.388048 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-trusted-ca\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.388066 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.388100 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.388161 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.388196 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-registry-certificates\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.389148 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.391344 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-trusted-ca\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.392728 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-registry-certificates\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.401536 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.401615 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-registry-tls\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.404759 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.408437 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-742bn\" (UniqueName: \"kubernetes.io/projected/5973f25a-75e9-4ae6-be71-ab3c05b9f63d-kube-api-access-742bn\") pod \"image-registry-5d9d95bf5b-rspp8\" (UID: \"5973f25a-75e9-4ae6-be71-ab3c05b9f63d\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.462300 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:17 crc kubenswrapper[5106]: I0320 00:21:17.655091 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rspp8"]
Mar 20 00:21:18 crc kubenswrapper[5106]: I0320 00:21:18.190151 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8" event={"ID":"5973f25a-75e9-4ae6-be71-ab3c05b9f63d","Type":"ContainerStarted","Data":"84e549b8d967e41310487b33815c3a53cc36064fc4c265e3fbd2b7d0d9fa3d57"}
Mar 20 00:21:18 crc kubenswrapper[5106]: I0320 00:21:18.191063 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8" event={"ID":"5973f25a-75e9-4ae6-be71-ab3c05b9f63d","Type":"ContainerStarted","Data":"ae045c823c82295ad32dbbb60bd8165160eef84c6c2e1a434b449512a9ad05be"}
Mar 20 00:21:18 crc kubenswrapper[5106]: I0320 00:21:18.191159 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8"
Mar 20 00:21:18 crc kubenswrapper[5106]: I0320 00:21:18.208458 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8" podStartSLOduration=2.208436286 podStartE2EDuration="2.208436286s" podCreationTimestamp="2026-03-20 00:21:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:21:18.206013729 +0000 UTC m=+732.639747803" watchObservedRunningTime="2026-03-20 00:21:18.208436286 +0000 UTC m=+732.642170340"
Mar 20 00:21:19 crc kubenswrapper[5106]: I0320 00:21:19.724714 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"]
Mar 20 00:21:19 crc kubenswrapper[5106]: I0320 00:21:19.815283 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"]
Mar 20 00:21:19 crc kubenswrapper[5106]: I0320 00:21:19.815442 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"
Mar 20 00:21:19 crc kubenswrapper[5106]: I0320 00:21:19.817671 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Mar 20 00:21:19 crc kubenswrapper[5106]: I0320 00:21:19.920119 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2607c5c5-17d2-449d-a4e2-679a43300ccb-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk\" (UID: \"2607c5c5-17d2-449d-a4e2-679a43300ccb\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"
Mar 20 00:21:19 crc kubenswrapper[5106]: I0320 00:21:19.920256 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2607c5c5-17d2-449d-a4e2-679a43300ccb-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk\" (UID: \"2607c5c5-17d2-449d-a4e2-679a43300ccb\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"
Mar 20 00:21:19 crc kubenswrapper[5106]: I0320 00:21:19.920284 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqnc8\" (UniqueName: \"kubernetes.io/projected/2607c5c5-17d2-449d-a4e2-679a43300ccb-kube-api-access-tqnc8\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk\" (UID: \"2607c5c5-17d2-449d-a4e2-679a43300ccb\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"
Mar 20 00:21:20 crc kubenswrapper[5106]: I0320 00:21:20.021790 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2607c5c5-17d2-449d-a4e2-679a43300ccb-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk\" (UID: \"2607c5c5-17d2-449d-a4e2-679a43300ccb\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"
Mar 20 00:21:20 crc kubenswrapper[5106]: I0320 00:21:20.021855 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tqnc8\" (UniqueName: \"kubernetes.io/projected/2607c5c5-17d2-449d-a4e2-679a43300ccb-kube-api-access-tqnc8\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk\" (UID: \"2607c5c5-17d2-449d-a4e2-679a43300ccb\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"
Mar 20 00:21:20 crc kubenswrapper[5106]: I0320 00:21:20.021909 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2607c5c5-17d2-449d-a4e2-679a43300ccb-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk\" (UID: \"2607c5c5-17d2-449d-a4e2-679a43300ccb\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"
Mar 20 00:21:20 crc kubenswrapper[5106]: I0320 00:21:20.022277 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2607c5c5-17d2-449d-a4e2-679a43300ccb-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk\" (UID: \"2607c5c5-17d2-449d-a4e2-679a43300ccb\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"
Mar 20 00:21:20 crc kubenswrapper[5106]: I0320 00:21:20.022376 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2607c5c5-17d2-449d-a4e2-679a43300ccb-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk\" (UID: \"2607c5c5-17d2-449d-a4e2-679a43300ccb\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"
Mar 20 00:21:20 crc kubenswrapper[5106]: I0320 00:21:20.043840 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqnc8\" (UniqueName: \"kubernetes.io/projected/2607c5c5-17d2-449d-a4e2-679a43300ccb-kube-api-access-tqnc8\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk\" (UID: \"2607c5c5-17d2-449d-a4e2-679a43300ccb\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"
Mar 20 00:21:20 crc kubenswrapper[5106]: I0320 00:21:20.149157 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"
Mar 20 00:21:20 crc kubenswrapper[5106]: I0320 00:21:20.581872 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"]
Mar 20 00:21:20 crc kubenswrapper[5106]: W0320 00:21:20.589055 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2607c5c5_17d2_449d_a4e2_679a43300ccb.slice/crio-9862006cb0476f22cea0be0896ca4dca2e83f8fe28548916eb8bee52fb3015d1 WatchSource:0}: Error finding container 9862006cb0476f22cea0be0896ca4dca2e83f8fe28548916eb8bee52fb3015d1: Status 404 returned error can't find the container with id 9862006cb0476f22cea0be0896ca4dca2e83f8fe28548916eb8bee52fb3015d1
Mar 20 00:21:21 crc kubenswrapper[5106]: I0320 00:21:21.210551 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk" event={"ID":"2607c5c5-17d2-449d-a4e2-679a43300ccb","Type":"ContainerStarted","Data":"1844561627fb28268e36af637e2c131e0c773a0793f9ba6cd609d0d66691493b"}
Mar 20 00:21:21 crc kubenswrapper[5106]: I0320 00:21:21.210874 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk" event={"ID":"2607c5c5-17d2-449d-a4e2-679a43300ccb","Type":"ContainerStarted","Data":"9862006cb0476f22cea0be0896ca4dca2e83f8fe28548916eb8bee52fb3015d1"}
Mar 20 00:21:22 crc kubenswrapper[5106]: I0320 00:21:22.216944 5106 generic.go:358] "Generic (PLEG): container finished" podID="2607c5c5-17d2-449d-a4e2-679a43300ccb" containerID="1844561627fb28268e36af637e2c131e0c773a0793f9ba6cd609d0d66691493b" exitCode=0
Mar 20 00:21:22 crc kubenswrapper[5106]: I0320 00:21:22.216996 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk" event={"ID":"2607c5c5-17d2-449d-a4e2-679a43300ccb","Type":"ContainerDied","Data":"1844561627fb28268e36af637e2c131e0c773a0793f9ba6cd609d0d66691493b"}
Mar 20 00:21:25 crc kubenswrapper[5106]: I0320 00:21:25.236634 5106 generic.go:358] "Generic (PLEG): container finished" podID="2607c5c5-17d2-449d-a4e2-679a43300ccb" containerID="fb5cfd0896df7934f09c9d10bf531998e550a7823fc2bf110dc1489966c90283" exitCode=0
Mar 20 00:21:25 crc kubenswrapper[5106]: I0320 00:21:25.236672 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk" event={"ID":"2607c5c5-17d2-449d-a4e2-679a43300ccb","Type":"ContainerDied","Data":"fb5cfd0896df7934f09c9d10bf531998e550a7823fc2bf110dc1489966c90283"}
Mar 20 00:21:26 crc kubenswrapper[5106]: I0320 00:21:26.245367 5106 generic.go:358] "Generic (PLEG): container finished" podID="2607c5c5-17d2-449d-a4e2-679a43300ccb" containerID="7b4cbdf303f890a2e2fcfc71ef2b7df612b967be56e6e23a16bf547e1098a920" exitCode=0
Mar 20 00:21:26 crc kubenswrapper[5106]: I0320 00:21:26.245417 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk" event={"ID":"2607c5c5-17d2-449d-a4e2-679a43300ccb","Type":"ContainerDied","Data":"7b4cbdf303f890a2e2fcfc71ef2b7df612b967be56e6e23a16bf547e1098a920"}
Mar 20 00:21:26 crc kubenswrapper[5106]: I0320 00:21:26.495212 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w"]
Mar 20 00:21:26 crc kubenswrapper[5106]: I0320 00:21:26.617218 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w"]
Mar 20 00:21:26 crc kubenswrapper[5106]: I0320 00:21:26.617422 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w"
Mar 20 00:21:26 crc kubenswrapper[5106]: I0320 00:21:26.714991 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-util\") pod \"7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w\" (UID: \"3184a606-fbc0-4b98-bab9-3050d0f2a6fc\") " pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w"
Mar 20 00:21:26 crc kubenswrapper[5106]: I0320 00:21:26.715073 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxt6\" (UniqueName: \"kubernetes.io/projected/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-kube-api-access-vbxt6\") pod \"7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w\" (UID: \"3184a606-fbc0-4b98-bab9-3050d0f2a6fc\") " pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w"
Mar 20 00:21:26 crc kubenswrapper[5106]: I0320 00:21:26.715105 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-bundle\") pod \"7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w\" (UID: \"3184a606-fbc0-4b98-bab9-3050d0f2a6fc\") " pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w"
Mar 20 00:21:26 crc kubenswrapper[5106]: I0320 00:21:26.816371 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-util\") pod \"7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w\" (UID: \"3184a606-fbc0-4b98-bab9-3050d0f2a6fc\") " pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w"
Mar 20 00:21:26 crc kubenswrapper[5106]: I0320 00:21:26.816713 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vbxt6\" (UniqueName: \"kubernetes.io/projected/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-kube-api-access-vbxt6\") pod \"7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w\" (UID: \"3184a606-fbc0-4b98-bab9-3050d0f2a6fc\") " pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w"
Mar 20 00:21:26 crc kubenswrapper[5106]: I0320 00:21:26.816859 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-bundle\") pod \"7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w\" (UID: \"3184a606-fbc0-4b98-bab9-3050d0f2a6fc\") " pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w"
Mar 20 00:21:26 crc kubenswrapper[5106]: I0320 00:21:26.816957 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-util\") pod \"7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w\" (UID: \"3184a606-fbc0-4b98-bab9-3050d0f2a6fc\") " pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w"
Mar 20 00:21:26 crc kubenswrapper[5106]: I0320 00:21:26.817170 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-bundle\") pod \"7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w\" (UID: \"3184a606-fbc0-4b98-bab9-3050d0f2a6fc\") " pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w"
Mar 20 00:21:26 crc kubenswrapper[5106]: I0320 00:21:26.844362 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbxt6\" (UniqueName: \"kubernetes.io/projected/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-kube-api-access-vbxt6\") pod \"7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w\" (UID: \"3184a606-fbc0-4b98-bab9-3050d0f2a6fc\") " pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w"
Mar 20 00:21:26 crc kubenswrapper[5106]: I0320 00:21:26.945704 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w"
Mar 20 00:21:27 crc kubenswrapper[5106]: I0320 00:21:27.121835 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w"]
Mar 20 00:21:27 crc kubenswrapper[5106]: I0320 00:21:27.255986 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w" event={"ID":"3184a606-fbc0-4b98-bab9-3050d0f2a6fc","Type":"ContainerStarted","Data":"46ef89ef767917c2fbf5b1b36c5824d5045689ed2edcd202bcacd3e9614bebfb"}
Mar 20 00:21:27 crc kubenswrapper[5106]: I0320 00:21:27.256304 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w" event={"ID":"3184a606-fbc0-4b98-bab9-3050d0f2a6fc","Type":"ContainerStarted","Data":"0bd3573715718e02243571ba1baf5b846572e53d87c9b942d0f7fa034f4a7d4a"}
Mar 20 00:21:27 crc kubenswrapper[5106]: I0320 00:21:27.513568 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"
Mar 20 00:21:27 crc kubenswrapper[5106]: I0320 00:21:27.525102 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2607c5c5-17d2-449d-a4e2-679a43300ccb-bundle\") pod \"2607c5c5-17d2-449d-a4e2-679a43300ccb\" (UID: \"2607c5c5-17d2-449d-a4e2-679a43300ccb\") "
Mar 20 00:21:27 crc kubenswrapper[5106]: I0320 00:21:27.525245 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2607c5c5-17d2-449d-a4e2-679a43300ccb-util\") pod \"2607c5c5-17d2-449d-a4e2-679a43300ccb\" (UID: \"2607c5c5-17d2-449d-a4e2-679a43300ccb\") "
Mar 20 00:21:27 crc kubenswrapper[5106]: I0320 00:21:27.525354 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqnc8\" (UniqueName: \"kubernetes.io/projected/2607c5c5-17d2-449d-a4e2-679a43300ccb-kube-api-access-tqnc8\") pod \"2607c5c5-17d2-449d-a4e2-679a43300ccb\" (UID: \"2607c5c5-17d2-449d-a4e2-679a43300ccb\") "
Mar 20 00:21:27 crc kubenswrapper[5106]: I0320 00:21:27.528298 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2607c5c5-17d2-449d-a4e2-679a43300ccb-bundle" (OuterVolumeSpecName: "bundle") pod "2607c5c5-17d2-449d-a4e2-679a43300ccb" (UID: "2607c5c5-17d2-449d-a4e2-679a43300ccb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:21:27 crc kubenswrapper[5106]: I0320 00:21:27.534262 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2607c5c5-17d2-449d-a4e2-679a43300ccb-kube-api-access-tqnc8" (OuterVolumeSpecName: "kube-api-access-tqnc8") pod "2607c5c5-17d2-449d-a4e2-679a43300ccb" (UID: "2607c5c5-17d2-449d-a4e2-679a43300ccb"). InnerVolumeSpecName "kube-api-access-tqnc8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:21:27 crc kubenswrapper[5106]: I0320 00:21:27.535769 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2607c5c5-17d2-449d-a4e2-679a43300ccb-util" (OuterVolumeSpecName: "util") pod "2607c5c5-17d2-449d-a4e2-679a43300ccb" (UID: "2607c5c5-17d2-449d-a4e2-679a43300ccb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:21:27 crc kubenswrapper[5106]: I0320 00:21:27.627403 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tqnc8\" (UniqueName: \"kubernetes.io/projected/2607c5c5-17d2-449d-a4e2-679a43300ccb-kube-api-access-tqnc8\") on node \"crc\" DevicePath \"\""
Mar 20 00:21:27 crc kubenswrapper[5106]: I0320 00:21:27.627452 5106 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2607c5c5-17d2-449d-a4e2-679a43300ccb-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 00:21:27 crc kubenswrapper[5106]: I0320 00:21:27.627466 5106 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2607c5c5-17d2-449d-a4e2-679a43300ccb-util\") on node \"crc\" DevicePath \"\""
Mar 20 00:21:28 crc kubenswrapper[5106]: I0320 00:21:28.264305 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk" event={"ID":"2607c5c5-17d2-449d-a4e2-679a43300ccb","Type":"ContainerDied","Data":"9862006cb0476f22cea0be0896ca4dca2e83f8fe28548916eb8bee52fb3015d1"}
Mar 20 00:21:28 crc kubenswrapper[5106]: I0320 00:21:28.264346 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9862006cb0476f22cea0be0896ca4dca2e83f8fe28548916eb8bee52fb3015d1"
Mar 20 00:21:28 crc kubenswrapper[5106]: I0320 00:21:28.264388 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk"
Mar 20 00:21:28 crc kubenswrapper[5106]: I0320 00:21:28.265847 5106 generic.go:358] "Generic (PLEG): container finished" podID="3184a606-fbc0-4b98-bab9-3050d0f2a6fc" containerID="46ef89ef767917c2fbf5b1b36c5824d5045689ed2edcd202bcacd3e9614bebfb" exitCode=0
Mar 20 00:21:28 crc kubenswrapper[5106]: I0320 00:21:28.265956 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w" event={"ID":"3184a606-fbc0-4b98-bab9-3050d0f2a6fc","Type":"ContainerDied","Data":"46ef89ef767917c2fbf5b1b36c5824d5045689ed2edcd202bcacd3e9614bebfb"}
Mar 20 00:21:30 crc kubenswrapper[5106]: I0320 00:21:30.280838 5106 generic.go:358] "Generic (PLEG): container finished" podID="3184a606-fbc0-4b98-bab9-3050d0f2a6fc" containerID="de43c9201ddcb4de9b2a9b4bdceea780328e643af24e0785a4609f9d3e260d86" exitCode=0
Mar 20 00:21:30 crc kubenswrapper[5106]: I0320 00:21:30.280938 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w" event={"ID":"3184a606-fbc0-4b98-bab9-3050d0f2a6fc","Type":"ContainerDied","Data":"de43c9201ddcb4de9b2a9b4bdceea780328e643af24e0785a4609f9d3e260d86"}
Mar 20 00:21:30 crc kubenswrapper[5106]: I0320 00:21:30.900854 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc"]
Mar 20 00:21:30 crc kubenswrapper[5106]: I0320 00:21:30.901650 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2607c5c5-17d2-449d-a4e2-679a43300ccb" containerName="util"
Mar 20 00:21:30 crc kubenswrapper[5106]: I0320 00:21:30.901676 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="2607c5c5-17d2-449d-a4e2-679a43300ccb" containerName="util"
Mar 20
00:21:30.901696 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2607c5c5-17d2-449d-a4e2-679a43300ccb" containerName="extract" Mar 20 00:21:30 crc kubenswrapper[5106]: I0320 00:21:30.901705 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="2607c5c5-17d2-449d-a4e2-679a43300ccb" containerName="extract" Mar 20 00:21:30 crc kubenswrapper[5106]: I0320 00:21:30.901737 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2607c5c5-17d2-449d-a4e2-679a43300ccb" containerName="pull" Mar 20 00:21:30 crc kubenswrapper[5106]: I0320 00:21:30.901744 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="2607c5c5-17d2-449d-a4e2-679a43300ccb" containerName="pull" Mar 20 00:21:30 crc kubenswrapper[5106]: I0320 00:21:30.901846 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="2607c5c5-17d2-449d-a4e2-679a43300ccb" containerName="extract" Mar 20 00:21:30 crc kubenswrapper[5106]: I0320 00:21:30.911425 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" Mar 20 00:21:30 crc kubenswrapper[5106]: I0320 00:21:30.914954 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc"] Mar 20 00:21:30 crc kubenswrapper[5106]: I0320 00:21:30.975746 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwmds\" (UniqueName: \"kubernetes.io/projected/2f3907be-addc-4039-afab-aea79099b9f2-kube-api-access-fwmds\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc\" (UID: \"2f3907be-addc-4039-afab-aea79099b9f2\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" Mar 20 00:21:30 crc kubenswrapper[5106]: I0320 00:21:30.975839 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f3907be-addc-4039-afab-aea79099b9f2-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc\" (UID: \"2f3907be-addc-4039-afab-aea79099b9f2\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" Mar 20 00:21:30 crc kubenswrapper[5106]: I0320 00:21:30.975874 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f3907be-addc-4039-afab-aea79099b9f2-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc\" (UID: \"2f3907be-addc-4039-afab-aea79099b9f2\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.056994 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rhkc7"] Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 
00:21:31.077621 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f3907be-addc-4039-afab-aea79099b9f2-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc\" (UID: \"2f3907be-addc-4039-afab-aea79099b9f2\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.077668 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f3907be-addc-4039-afab-aea79099b9f2-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc\" (UID: \"2f3907be-addc-4039-afab-aea79099b9f2\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.077787 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fwmds\" (UniqueName: \"kubernetes.io/projected/2f3907be-addc-4039-afab-aea79099b9f2-kube-api-access-fwmds\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc\" (UID: \"2f3907be-addc-4039-afab-aea79099b9f2\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.078381 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f3907be-addc-4039-afab-aea79099b9f2-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc\" (UID: \"2f3907be-addc-4039-afab-aea79099b9f2\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.078402 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2f3907be-addc-4039-afab-aea79099b9f2-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc\" (UID: \"2f3907be-addc-4039-afab-aea79099b9f2\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.085529 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhkc7"] Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.085743 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.126546 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwmds\" (UniqueName: \"kubernetes.io/projected/2f3907be-addc-4039-afab-aea79099b9f2-kube-api-access-fwmds\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc\" (UID: \"2f3907be-addc-4039-afab-aea79099b9f2\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.178992 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2de389-39e2-4ebb-b208-248ff53c060b-utilities\") pod \"certified-operators-rhkc7\" (UID: \"9b2de389-39e2-4ebb-b208-248ff53c060b\") " pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.179071 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms4vt\" (UniqueName: \"kubernetes.io/projected/9b2de389-39e2-4ebb-b208-248ff53c060b-kube-api-access-ms4vt\") pod \"certified-operators-rhkc7\" (UID: \"9b2de389-39e2-4ebb-b208-248ff53c060b\") " pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:21:31 crc 
kubenswrapper[5106]: I0320 00:21:31.179151 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2de389-39e2-4ebb-b208-248ff53c060b-catalog-content\") pod \"certified-operators-rhkc7\" (UID: \"9b2de389-39e2-4ebb-b208-248ff53c060b\") " pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.226626 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.281101 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2de389-39e2-4ebb-b208-248ff53c060b-utilities\") pod \"certified-operators-rhkc7\" (UID: \"9b2de389-39e2-4ebb-b208-248ff53c060b\") " pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.281382 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ms4vt\" (UniqueName: \"kubernetes.io/projected/9b2de389-39e2-4ebb-b208-248ff53c060b-kube-api-access-ms4vt\") pod \"certified-operators-rhkc7\" (UID: \"9b2de389-39e2-4ebb-b208-248ff53c060b\") " pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.281423 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2de389-39e2-4ebb-b208-248ff53c060b-catalog-content\") pod \"certified-operators-rhkc7\" (UID: \"9b2de389-39e2-4ebb-b208-248ff53c060b\") " pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.281757 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9b2de389-39e2-4ebb-b208-248ff53c060b-utilities\") pod \"certified-operators-rhkc7\" (UID: \"9b2de389-39e2-4ebb-b208-248ff53c060b\") " pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.281935 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2de389-39e2-4ebb-b208-248ff53c060b-catalog-content\") pod \"certified-operators-rhkc7\" (UID: \"9b2de389-39e2-4ebb-b208-248ff53c060b\") " pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.291074 5106 generic.go:358] "Generic (PLEG): container finished" podID="3184a606-fbc0-4b98-bab9-3050d0f2a6fc" containerID="ee5999f3975036366a4919a2c7fe260894875c5f047c210ec578689ad4e1aa44" exitCode=0 Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.291189 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w" event={"ID":"3184a606-fbc0-4b98-bab9-3050d0f2a6fc","Type":"ContainerDied","Data":"ee5999f3975036366a4919a2c7fe260894875c5f047c210ec578689ad4e1aa44"} Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.315376 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms4vt\" (UniqueName: \"kubernetes.io/projected/9b2de389-39e2-4ebb-b208-248ff53c060b-kube-api-access-ms4vt\") pod \"certified-operators-rhkc7\" (UID: \"9b2de389-39e2-4ebb-b208-248ff53c060b\") " pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.400175 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.511774 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc"] Mar 20 00:21:31 crc kubenswrapper[5106]: I0320 00:21:31.925136 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhkc7"] Mar 20 00:21:31 crc kubenswrapper[5106]: W0320 00:21:31.939007 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b2de389_39e2_4ebb_b208_248ff53c060b.slice/crio-b2fe200062dfee25b36bf07eea57055d9e3942ea3f49e7cd04619274dce73a95 WatchSource:0}: Error finding container b2fe200062dfee25b36bf07eea57055d9e3942ea3f49e7cd04619274dce73a95: Status 404 returned error can't find the container with id b2fe200062dfee25b36bf07eea57055d9e3942ea3f49e7cd04619274dce73a95 Mar 20 00:21:32 crc kubenswrapper[5106]: I0320 00:21:32.296954 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhkc7" event={"ID":"9b2de389-39e2-4ebb-b208-248ff53c060b","Type":"ContainerStarted","Data":"b2fe200062dfee25b36bf07eea57055d9e3942ea3f49e7cd04619274dce73a95"} Mar 20 00:21:32 crc kubenswrapper[5106]: I0320 00:21:32.297881 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" event={"ID":"2f3907be-addc-4039-afab-aea79099b9f2","Type":"ContainerStarted","Data":"fc7f47c05a0b9dba8e9a7860083ef6662486e50b084a74afb3ef8a23284e94b3"} Mar 20 00:21:32 crc kubenswrapper[5106]: I0320 00:21:32.642490 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w" Mar 20 00:21:32 crc kubenswrapper[5106]: I0320 00:21:32.718603 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-bundle\") pod \"3184a606-fbc0-4b98-bab9-3050d0f2a6fc\" (UID: \"3184a606-fbc0-4b98-bab9-3050d0f2a6fc\") " Mar 20 00:21:32 crc kubenswrapper[5106]: I0320 00:21:32.718873 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbxt6\" (UniqueName: \"kubernetes.io/projected/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-kube-api-access-vbxt6\") pod \"3184a606-fbc0-4b98-bab9-3050d0f2a6fc\" (UID: \"3184a606-fbc0-4b98-bab9-3050d0f2a6fc\") " Mar 20 00:21:32 crc kubenswrapper[5106]: I0320 00:21:32.718993 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-util\") pod \"3184a606-fbc0-4b98-bab9-3050d0f2a6fc\" (UID: \"3184a606-fbc0-4b98-bab9-3050d0f2a6fc\") " Mar 20 00:21:32 crc kubenswrapper[5106]: I0320 00:21:32.734537 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-bundle" (OuterVolumeSpecName: "bundle") pod "3184a606-fbc0-4b98-bab9-3050d0f2a6fc" (UID: "3184a606-fbc0-4b98-bab9-3050d0f2a6fc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:21:32 crc kubenswrapper[5106]: I0320 00:21:32.744292 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-kube-api-access-vbxt6" (OuterVolumeSpecName: "kube-api-access-vbxt6") pod "3184a606-fbc0-4b98-bab9-3050d0f2a6fc" (UID: "3184a606-fbc0-4b98-bab9-3050d0f2a6fc"). InnerVolumeSpecName "kube-api-access-vbxt6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:21:32 crc kubenswrapper[5106]: I0320 00:21:32.821855 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vbxt6\" (UniqueName: \"kubernetes.io/projected/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-kube-api-access-vbxt6\") on node \"crc\" DevicePath \"\"" Mar 20 00:21:32 crc kubenswrapper[5106]: I0320 00:21:32.821895 5106 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 00:21:33 crc kubenswrapper[5106]: I0320 00:21:33.057401 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-util" (OuterVolumeSpecName: "util") pod "3184a606-fbc0-4b98-bab9-3050d0f2a6fc" (UID: "3184a606-fbc0-4b98-bab9-3050d0f2a6fc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:21:33 crc kubenswrapper[5106]: I0320 00:21:33.126079 5106 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3184a606-fbc0-4b98-bab9-3050d0f2a6fc-util\") on node \"crc\" DevicePath \"\"" Mar 20 00:21:33 crc kubenswrapper[5106]: I0320 00:21:33.313924 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w" event={"ID":"3184a606-fbc0-4b98-bab9-3050d0f2a6fc","Type":"ContainerDied","Data":"0bd3573715718e02243571ba1baf5b846572e53d87c9b942d0f7fa034f4a7d4a"} Mar 20 00:21:33 crc kubenswrapper[5106]: I0320 00:21:33.313969 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bd3573715718e02243571ba1baf5b846572e53d87c9b942d0f7fa034f4a7d4a" Mar 20 00:21:33 crc kubenswrapper[5106]: I0320 00:21:33.313967 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w" Mar 20 00:21:33 crc kubenswrapper[5106]: I0320 00:21:33.316290 5106 generic.go:358] "Generic (PLEG): container finished" podID="9b2de389-39e2-4ebb-b208-248ff53c060b" containerID="928f5d9da567258fb7f444c6c5ae4305e98a2a1078d6b72faa954437f00bce41" exitCode=0 Mar 20 00:21:33 crc kubenswrapper[5106]: I0320 00:21:33.316356 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhkc7" event={"ID":"9b2de389-39e2-4ebb-b208-248ff53c060b","Type":"ContainerDied","Data":"928f5d9da567258fb7f444c6c5ae4305e98a2a1078d6b72faa954437f00bce41"} Mar 20 00:21:33 crc kubenswrapper[5106]: I0320 00:21:33.318891 5106 generic.go:358] "Generic (PLEG): container finished" podID="2f3907be-addc-4039-afab-aea79099b9f2" containerID="1cd8637a53f4f1f392ebfa587b36b66eda0c26913f4244cc86ec9ca05238ce46" exitCode=0 Mar 20 00:21:33 crc kubenswrapper[5106]: I0320 00:21:33.318937 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" event={"ID":"2f3907be-addc-4039-afab-aea79099b9f2","Type":"ContainerDied","Data":"1cd8637a53f4f1f392ebfa587b36b66eda0c26913f4244cc86ec9ca05238ce46"} Mar 20 00:21:34 crc kubenswrapper[5106]: I0320 00:21:34.342078 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhkc7" event={"ID":"9b2de389-39e2-4ebb-b208-248ff53c060b","Type":"ContainerStarted","Data":"46c81d624731c8707be77e08920139a5eca61083ebf2722bf7cc7a2446c849cf"} Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.487079 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mz6xv"] Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.487940 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3184a606-fbc0-4b98-bab9-3050d0f2a6fc" 
containerName="util" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.487954 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3184a606-fbc0-4b98-bab9-3050d0f2a6fc" containerName="util" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.487966 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3184a606-fbc0-4b98-bab9-3050d0f2a6fc" containerName="pull" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.487972 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3184a606-fbc0-4b98-bab9-3050d0f2a6fc" containerName="pull" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.487989 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3184a606-fbc0-4b98-bab9-3050d0f2a6fc" containerName="extract" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.487995 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="3184a606-fbc0-4b98-bab9-3050d0f2a6fc" containerName="extract" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.488103 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="3184a606-fbc0-4b98-bab9-3050d0f2a6fc" containerName="extract" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.590874 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mz6xv"] Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.591026 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.659633 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f91136e0-e583-43e3-9c30-919d2f33efa6-catalog-content\") pod \"redhat-operators-mz6xv\" (UID: \"f91136e0-e583-43e3-9c30-919d2f33efa6\") " pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.659682 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf4wp\" (UniqueName: \"kubernetes.io/projected/f91136e0-e583-43e3-9c30-919d2f33efa6-kube-api-access-nf4wp\") pod \"redhat-operators-mz6xv\" (UID: \"f91136e0-e583-43e3-9c30-919d2f33efa6\") " pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.659722 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f91136e0-e583-43e3-9c30-919d2f33efa6-utilities\") pod \"redhat-operators-mz6xv\" (UID: \"f91136e0-e583-43e3-9c30-919d2f33efa6\") " pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.761307 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f91136e0-e583-43e3-9c30-919d2f33efa6-catalog-content\") pod \"redhat-operators-mz6xv\" (UID: \"f91136e0-e583-43e3-9c30-919d2f33efa6\") " pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.761364 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nf4wp\" (UniqueName: \"kubernetes.io/projected/f91136e0-e583-43e3-9c30-919d2f33efa6-kube-api-access-nf4wp\") pod \"redhat-operators-mz6xv\" 
(UID: \"f91136e0-e583-43e3-9c30-919d2f33efa6\") " pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.761414 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f91136e0-e583-43e3-9c30-919d2f33efa6-utilities\") pod \"redhat-operators-mz6xv\" (UID: \"f91136e0-e583-43e3-9c30-919d2f33efa6\") " pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.761985 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f91136e0-e583-43e3-9c30-919d2f33efa6-catalog-content\") pod \"redhat-operators-mz6xv\" (UID: \"f91136e0-e583-43e3-9c30-919d2f33efa6\") " pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.762053 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f91136e0-e583-43e3-9c30-919d2f33efa6-utilities\") pod \"redhat-operators-mz6xv\" (UID: \"f91136e0-e583-43e3-9c30-919d2f33efa6\") " pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.786593 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf4wp\" (UniqueName: \"kubernetes.io/projected/f91136e0-e583-43e3-9c30-919d2f33efa6-kube-api-access-nf4wp\") pod \"redhat-operators-mz6xv\" (UID: \"f91136e0-e583-43e3-9c30-919d2f33efa6\") " pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:21:35 crc kubenswrapper[5106]: I0320 00:21:35.911978 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:21:36 crc kubenswrapper[5106]: I0320 00:21:36.360896 5106 generic.go:358] "Generic (PLEG): container finished" podID="9b2de389-39e2-4ebb-b208-248ff53c060b" containerID="46c81d624731c8707be77e08920139a5eca61083ebf2722bf7cc7a2446c849cf" exitCode=0 Mar 20 00:21:36 crc kubenswrapper[5106]: I0320 00:21:36.361099 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhkc7" event={"ID":"9b2de389-39e2-4ebb-b208-248ff53c060b","Type":"ContainerDied","Data":"46c81d624731c8707be77e08920139a5eca61083ebf2722bf7cc7a2446c849cf"} Mar 20 00:21:36 crc kubenswrapper[5106]: I0320 00:21:36.404176 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mz6xv"] Mar 20 00:21:36 crc kubenswrapper[5106]: W0320 00:21:36.428806 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf91136e0_e583_43e3_9c30_919d2f33efa6.slice/crio-86f73dbaa2c6d76aa1412679ca65222cb1e2d51b1bca96091d797b0098a67446 WatchSource:0}: Error finding container 86f73dbaa2c6d76aa1412679ca65222cb1e2d51b1bca96091d797b0098a67446: Status 404 returned error can't find the container with id 86f73dbaa2c6d76aa1412679ca65222cb1e2d51b1bca96091d797b0098a67446 Mar 20 00:21:37 crc kubenswrapper[5106]: I0320 00:21:37.372243 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mz6xv" event={"ID":"f91136e0-e583-43e3-9c30-919d2f33efa6","Type":"ContainerStarted","Data":"86f73dbaa2c6d76aa1412679ca65222cb1e2d51b1bca96091d797b0098a67446"} Mar 20 00:21:40 crc kubenswrapper[5106]: I0320 00:21:40.210569 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-rspp8" Mar 20 00:21:40 crc kubenswrapper[5106]: I0320 00:21:40.297498 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-66587d64c8-jhgps"] Mar 20 00:21:40 crc kubenswrapper[5106]: I0320 00:21:40.408460 5106 generic.go:358] "Generic (PLEG): container finished" podID="f91136e0-e583-43e3-9c30-919d2f33efa6" containerID="4b144b60f4d952e9a8e4fa53cff5590ca5721682122419df4d1effa84c814544" exitCode=0 Mar 20 00:21:40 crc kubenswrapper[5106]: I0320 00:21:40.408679 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mz6xv" event={"ID":"f91136e0-e583-43e3-9c30-919d2f33efa6","Type":"ContainerDied","Data":"4b144b60f4d952e9a8e4fa53cff5590ca5721682122419df4d1effa84c814544"} Mar 20 00:21:40 crc kubenswrapper[5106]: I0320 00:21:40.421414 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhkc7" event={"ID":"9b2de389-39e2-4ebb-b208-248ff53c060b","Type":"ContainerStarted","Data":"b8426451d2e19e2d93ee99b78940ec63e6d2509e6a35b1604cf04db53156b284"} Mar 20 00:21:40 crc kubenswrapper[5106]: I0320 00:21:40.424320 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" event={"ID":"2f3907be-addc-4039-afab-aea79099b9f2","Type":"ContainerStarted","Data":"9a50aae8fc89c2caa132e81dfdd694fa2ef4d1b025ef4068c040ad96fc01e1c9"} Mar 20 00:21:40 crc kubenswrapper[5106]: I0320 00:21:40.473045 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rhkc7" podStartSLOduration=8.794176605 podStartE2EDuration="9.473024114s" podCreationTimestamp="2026-03-20 00:21:31 +0000 UTC" firstStartedPulling="2026-03-20 00:21:33.317177936 +0000 UTC m=+747.750911990" lastFinishedPulling="2026-03-20 00:21:33.996025445 +0000 UTC m=+748.429759499" observedRunningTime="2026-03-20 00:21:40.4694336 +0000 UTC m=+754.903167654" watchObservedRunningTime="2026-03-20 00:21:40.473024114 +0000 UTC m=+754.906758168" Mar 20 00:21:41 crc kubenswrapper[5106]: 
I0320 00:21:41.402010 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.402407 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.446910 5106 generic.go:358] "Generic (PLEG): container finished" podID="2f3907be-addc-4039-afab-aea79099b9f2" containerID="9a50aae8fc89c2caa132e81dfdd694fa2ef4d1b025ef4068c040ad96fc01e1c9" exitCode=0 Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.448085 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" event={"ID":"2f3907be-addc-4039-afab-aea79099b9f2","Type":"ContainerDied","Data":"9a50aae8fc89c2caa132e81dfdd694fa2ef4d1b025ef4068c040ad96fc01e1c9"} Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.627232 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-f76db68c9-j9h6m"] Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.650633 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-f76db68c9-j9h6m"] Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.650792 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-f76db68c9-j9h6m" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.652953 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.653490 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-8w4bz\"" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.654979 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.655250 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.797785 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8bvx\" (UniqueName: \"kubernetes.io/projected/a9535b92-2a0c-45e6-939e-878d22fec64e-kube-api-access-l8bvx\") pod \"elastic-operator-f76db68c9-j9h6m\" (UID: \"a9535b92-2a0c-45e6-939e-878d22fec64e\") " pod="service-telemetry/elastic-operator-f76db68c9-j9h6m" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.797872 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9535b92-2a0c-45e6-939e-878d22fec64e-webhook-cert\") pod \"elastic-operator-f76db68c9-j9h6m\" (UID: \"a9535b92-2a0c-45e6-939e-878d22fec64e\") " pod="service-telemetry/elastic-operator-f76db68c9-j9h6m" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.797906 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a9535b92-2a0c-45e6-939e-878d22fec64e-apiservice-cert\") pod 
\"elastic-operator-f76db68c9-j9h6m\" (UID: \"a9535b92-2a0c-45e6-939e-878d22fec64e\") " pod="service-telemetry/elastic-operator-f76db68c9-j9h6m" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.899562 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l8bvx\" (UniqueName: \"kubernetes.io/projected/a9535b92-2a0c-45e6-939e-878d22fec64e-kube-api-access-l8bvx\") pod \"elastic-operator-f76db68c9-j9h6m\" (UID: \"a9535b92-2a0c-45e6-939e-878d22fec64e\") " pod="service-telemetry/elastic-operator-f76db68c9-j9h6m" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.899920 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9535b92-2a0c-45e6-939e-878d22fec64e-webhook-cert\") pod \"elastic-operator-f76db68c9-j9h6m\" (UID: \"a9535b92-2a0c-45e6-939e-878d22fec64e\") " pod="service-telemetry/elastic-operator-f76db68c9-j9h6m" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.900052 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a9535b92-2a0c-45e6-939e-878d22fec64e-apiservice-cert\") pod \"elastic-operator-f76db68c9-j9h6m\" (UID: \"a9535b92-2a0c-45e6-939e-878d22fec64e\") " pod="service-telemetry/elastic-operator-f76db68c9-j9h6m" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.907445 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9535b92-2a0c-45e6-939e-878d22fec64e-webhook-cert\") pod \"elastic-operator-f76db68c9-j9h6m\" (UID: \"a9535b92-2a0c-45e6-939e-878d22fec64e\") " pod="service-telemetry/elastic-operator-f76db68c9-j9h6m" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.907498 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/a9535b92-2a0c-45e6-939e-878d22fec64e-apiservice-cert\") pod \"elastic-operator-f76db68c9-j9h6m\" (UID: \"a9535b92-2a0c-45e6-939e-878d22fec64e\") " pod="service-telemetry/elastic-operator-f76db68c9-j9h6m" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.919374 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8bvx\" (UniqueName: \"kubernetes.io/projected/a9535b92-2a0c-45e6-939e-878d22fec64e-kube-api-access-l8bvx\") pod \"elastic-operator-f76db68c9-j9h6m\" (UID: \"a9535b92-2a0c-45e6-939e-878d22fec64e\") " pod="service-telemetry/elastic-operator-f76db68c9-j9h6m" Mar 20 00:21:41 crc kubenswrapper[5106]: I0320 00:21:41.985140 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-f76db68c9-j9h6m" Mar 20 00:21:42 crc kubenswrapper[5106]: I0320 00:21:42.432601 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-f76db68c9-j9h6m"] Mar 20 00:21:42 crc kubenswrapper[5106]: W0320 00:21:42.441285 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9535b92_2a0c_45e6_939e_878d22fec64e.slice/crio-b98608f4d41cad222aa5f1c3a84f9380799df073abdf668adc00eec64e9367e9 WatchSource:0}: Error finding container b98608f4d41cad222aa5f1c3a84f9380799df073abdf668adc00eec64e9367e9: Status 404 returned error can't find the container with id b98608f4d41cad222aa5f1c3a84f9380799df073abdf668adc00eec64e9367e9 Mar 20 00:21:42 crc kubenswrapper[5106]: I0320 00:21:42.456802 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mz6xv" event={"ID":"f91136e0-e583-43e3-9c30-919d2f33efa6","Type":"ContainerStarted","Data":"1726ab1b73c7e79577f5f2b9dd7c93d98f2d303295132958ca041cc188b3e924"} Mar 20 00:21:42 crc kubenswrapper[5106]: I0320 00:21:42.459782 5106 generic.go:358] "Generic (PLEG): container finished" 
podID="2f3907be-addc-4039-afab-aea79099b9f2" containerID="73ae783476ae5bb4a60649952903bb69db479ff3e24bcda4badea08e1f65435e" exitCode=0 Mar 20 00:21:42 crc kubenswrapper[5106]: I0320 00:21:42.459890 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" event={"ID":"2f3907be-addc-4039-afab-aea79099b9f2","Type":"ContainerDied","Data":"73ae783476ae5bb4a60649952903bb69db479ff3e24bcda4badea08e1f65435e"} Mar 20 00:21:42 crc kubenswrapper[5106]: I0320 00:21:42.461323 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-f76db68c9-j9h6m" event={"ID":"a9535b92-2a0c-45e6-939e-878d22fec64e","Type":"ContainerStarted","Data":"b98608f4d41cad222aa5f1c3a84f9380799df073abdf668adc00eec64e9367e9"} Mar 20 00:21:42 crc kubenswrapper[5106]: I0320 00:21:42.486003 5106 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rhkc7" podUID="9b2de389-39e2-4ebb-b208-248ff53c060b" containerName="registry-server" probeResult="failure" output=< Mar 20 00:21:42 crc kubenswrapper[5106]: timeout: failed to connect service ":50051" within 1s Mar 20 00:21:42 crc kubenswrapper[5106]: > Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.197927 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-55568fc96c-xxrkx"] Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.302901 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-55568fc96c-xxrkx"] Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.303247 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-55568fc96c-xxrkx" Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.307738 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.309244 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.312549 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-b5m2v\"" Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.424616 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p92tq\" (UniqueName: \"kubernetes.io/projected/a5a00efc-0f62-4415-97e0-e0bfd2f1276a-kube-api-access-p92tq\") pod \"obo-prometheus-operator-55568fc96c-xxrkx\" (UID: \"a5a00efc-0f62-4415-97e0-e0bfd2f1276a\") " pod="openshift-operators/obo-prometheus-operator-55568fc96c-xxrkx" Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.525868 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p92tq\" (UniqueName: \"kubernetes.io/projected/a5a00efc-0f62-4415-97e0-e0bfd2f1276a-kube-api-access-p92tq\") pod \"obo-prometheus-operator-55568fc96c-xxrkx\" (UID: \"a5a00efc-0f62-4415-97e0-e0bfd2f1276a\") " pod="openshift-operators/obo-prometheus-operator-55568fc96c-xxrkx" Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.551563 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p92tq\" (UniqueName: \"kubernetes.io/projected/a5a00efc-0f62-4415-97e0-e0bfd2f1276a-kube-api-access-p92tq\") pod \"obo-prometheus-operator-55568fc96c-xxrkx\" (UID: \"a5a00efc-0f62-4415-97e0-e0bfd2f1276a\") " 
pod="openshift-operators/obo-prometheus-operator-55568fc96c-xxrkx" Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.621965 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-55568fc96c-xxrkx" Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.844772 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-bbc28"] Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.853994 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-bbc28" Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.858790 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-bbc28"] Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.861071 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.861215 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-gtpqx\"" Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.867122 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-p8g95"] Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.875290 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-p8g95" Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.908754 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-p8g95"] Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.937930 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2252d2cb-9b32-4137-8368-8b6c9bf4a267-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-659dbf9598-bbc28\" (UID: \"2252d2cb-9b32-4137-8368-8b6c9bf4a267\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-bbc28" Mar 20 00:21:43 crc kubenswrapper[5106]: I0320 00:21:43.938015 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2252d2cb-9b32-4137-8368-8b6c9bf4a267-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-659dbf9598-bbc28\" (UID: \"2252d2cb-9b32-4137-8368-8b6c9bf4a267\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-bbc28" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.039334 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/343fff88-557a-4473-b878-7badd8470e8c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-659dbf9598-p8g95\" (UID: \"343fff88-557a-4473-b878-7badd8470e8c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-p8g95" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.039429 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/343fff88-557a-4473-b878-7badd8470e8c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-659dbf9598-p8g95\" (UID: \"343fff88-557a-4473-b878-7badd8470e8c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-p8g95" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.039512 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2252d2cb-9b32-4137-8368-8b6c9bf4a267-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-659dbf9598-bbc28\" (UID: \"2252d2cb-9b32-4137-8368-8b6c9bf4a267\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-bbc28" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.039626 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2252d2cb-9b32-4137-8368-8b6c9bf4a267-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-659dbf9598-bbc28\" (UID: \"2252d2cb-9b32-4137-8368-8b6c9bf4a267\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-bbc28" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.047270 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2252d2cb-9b32-4137-8368-8b6c9bf4a267-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-659dbf9598-bbc28\" (UID: \"2252d2cb-9b32-4137-8368-8b6c9bf4a267\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-bbc28" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.052277 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2252d2cb-9b32-4137-8368-8b6c9bf4a267-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-659dbf9598-bbc28\" (UID: \"2252d2cb-9b32-4137-8368-8b6c9bf4a267\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-bbc28" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.064657 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-55568fc96c-xxrkx"] Mar 20 00:21:44 crc kubenswrapper[5106]: W0320 00:21:44.099456 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5a00efc_0f62_4415_97e0_e0bfd2f1276a.slice/crio-ff93fc095a4122c2a5481fd15ffd454573c54d708002a601dd1648a3d661b042 WatchSource:0}: Error finding container ff93fc095a4122c2a5481fd15ffd454573c54d708002a601dd1648a3d661b042: Status 404 returned error can't find the container with id ff93fc095a4122c2a5481fd15ffd454573c54d708002a601dd1648a3d661b042 Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.102890 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.140867 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/343fff88-557a-4473-b878-7badd8470e8c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-659dbf9598-p8g95\" (UID: \"343fff88-557a-4473-b878-7badd8470e8c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-p8g95" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.140988 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/343fff88-557a-4473-b878-7badd8470e8c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-659dbf9598-p8g95\" (UID: \"343fff88-557a-4473-b878-7badd8470e8c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-p8g95" Mar 20 00:21:44 crc kubenswrapper[5106]: 
I0320 00:21:44.148083 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/343fff88-557a-4473-b878-7badd8470e8c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-659dbf9598-p8g95\" (UID: \"343fff88-557a-4473-b878-7badd8470e8c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-p8g95" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.153551 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/343fff88-557a-4473-b878-7badd8470e8c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-659dbf9598-p8g95\" (UID: \"343fff88-557a-4473-b878-7badd8470e8c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-p8g95" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.182491 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-bbc28" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.216497 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-p8g95" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.240933 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-587f9c8867-zp5zg"] Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.245705 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f3907be-addc-4039-afab-aea79099b9f2-util\") pod \"2f3907be-addc-4039-afab-aea79099b9f2\" (UID: \"2f3907be-addc-4039-afab-aea79099b9f2\") " Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.245753 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f3907be-addc-4039-afab-aea79099b9f2-bundle\") pod \"2f3907be-addc-4039-afab-aea79099b9f2\" (UID: \"2f3907be-addc-4039-afab-aea79099b9f2\") " Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.245943 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwmds\" (UniqueName: \"kubernetes.io/projected/2f3907be-addc-4039-afab-aea79099b9f2-kube-api-access-fwmds\") pod \"2f3907be-addc-4039-afab-aea79099b9f2\" (UID: \"2f3907be-addc-4039-afab-aea79099b9f2\") " Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.248424 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f3907be-addc-4039-afab-aea79099b9f2-bundle" (OuterVolumeSpecName: "bundle") pod "2f3907be-addc-4039-afab-aea79099b9f2" (UID: "2f3907be-addc-4039-afab-aea79099b9f2"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.251293 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f3907be-addc-4039-afab-aea79099b9f2-kube-api-access-fwmds" (OuterVolumeSpecName: "kube-api-access-fwmds") pod "2f3907be-addc-4039-afab-aea79099b9f2" (UID: "2f3907be-addc-4039-afab-aea79099b9f2"). InnerVolumeSpecName "kube-api-access-fwmds". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.251728 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2f3907be-addc-4039-afab-aea79099b9f2" containerName="util" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.251841 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3907be-addc-4039-afab-aea79099b9f2" containerName="util" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.251851 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2f3907be-addc-4039-afab-aea79099b9f2" containerName="extract" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.251857 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3907be-addc-4039-afab-aea79099b9f2" containerName="extract" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.251871 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2f3907be-addc-4039-afab-aea79099b9f2" containerName="pull" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.251877 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3907be-addc-4039-afab-aea79099b9f2" containerName="pull" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.252006 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="2f3907be-addc-4039-afab-aea79099b9f2" containerName="extract" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.265844 5106 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-operators/observability-operator-587f9c8867-zp5zg" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.267521 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-587f9c8867-zp5zg"] Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.271312 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f3907be-addc-4039-afab-aea79099b9f2-util" (OuterVolumeSpecName: "util") pod "2f3907be-addc-4039-afab-aea79099b9f2" (UID: "2f3907be-addc-4039-afab-aea79099b9f2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.273316 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.273431 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-8wm2l\"" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.348019 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/eca89ef6-a7ff-48b6-a250-3b10b73a40be-observability-operator-tls\") pod \"observability-operator-587f9c8867-zp5zg\" (UID: \"eca89ef6-a7ff-48b6-a250-3b10b73a40be\") " pod="openshift-operators/observability-operator-587f9c8867-zp5zg" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.348458 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm6nx\" (UniqueName: \"kubernetes.io/projected/eca89ef6-a7ff-48b6-a250-3b10b73a40be-kube-api-access-hm6nx\") pod \"observability-operator-587f9c8867-zp5zg\" (UID: \"eca89ef6-a7ff-48b6-a250-3b10b73a40be\") " 
pod="openshift-operators/observability-operator-587f9c8867-zp5zg" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.348554 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fwmds\" (UniqueName: \"kubernetes.io/projected/2f3907be-addc-4039-afab-aea79099b9f2-kube-api-access-fwmds\") on node \"crc\" DevicePath \"\"" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.348569 5106 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f3907be-addc-4039-afab-aea79099b9f2-util\") on node \"crc\" DevicePath \"\"" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.348596 5106 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f3907be-addc-4039-afab-aea79099b9f2-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.452539 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/eca89ef6-a7ff-48b6-a250-3b10b73a40be-observability-operator-tls\") pod \"observability-operator-587f9c8867-zp5zg\" (UID: \"eca89ef6-a7ff-48b6-a250-3b10b73a40be\") " pod="openshift-operators/observability-operator-587f9c8867-zp5zg" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.452598 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hm6nx\" (UniqueName: \"kubernetes.io/projected/eca89ef6-a7ff-48b6-a250-3b10b73a40be-kube-api-access-hm6nx\") pod \"observability-operator-587f9c8867-zp5zg\" (UID: \"eca89ef6-a7ff-48b6-a250-3b10b73a40be\") " pod="openshift-operators/observability-operator-587f9c8867-zp5zg" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.459730 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/eca89ef6-a7ff-48b6-a250-3b10b73a40be-observability-operator-tls\") pod 
\"observability-operator-587f9c8867-zp5zg\" (UID: \"eca89ef6-a7ff-48b6-a250-3b10b73a40be\") " pod="openshift-operators/observability-operator-587f9c8867-zp5zg" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.471839 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm6nx\" (UniqueName: \"kubernetes.io/projected/eca89ef6-a7ff-48b6-a250-3b10b73a40be-kube-api-access-hm6nx\") pod \"observability-operator-587f9c8867-zp5zg\" (UID: \"eca89ef6-a7ff-48b6-a250-3b10b73a40be\") " pod="openshift-operators/observability-operator-587f9c8867-zp5zg" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.515010 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" event={"ID":"2f3907be-addc-4039-afab-aea79099b9f2","Type":"ContainerDied","Data":"fc7f47c05a0b9dba8e9a7860083ef6662486e50b084a74afb3ef8a23284e94b3"} Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.515051 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc7f47c05a0b9dba8e9a7860083ef6662486e50b084a74afb3ef8a23284e94b3" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.515156 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.523460 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-55568fc96c-xxrkx" event={"ID":"a5a00efc-0f62-4415-97e0-e0bfd2f1276a","Type":"ContainerStarted","Data":"ff93fc095a4122c2a5481fd15ffd454573c54d708002a601dd1648a3d661b042"} Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.527960 5106 generic.go:358] "Generic (PLEG): container finished" podID="f91136e0-e583-43e3-9c30-919d2f33efa6" containerID="1726ab1b73c7e79577f5f2b9dd7c93d98f2d303295132958ca041cc188b3e924" exitCode=0 Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.528115 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mz6xv" event={"ID":"f91136e0-e583-43e3-9c30-919d2f33efa6","Type":"ContainerDied","Data":"1726ab1b73c7e79577f5f2b9dd7c93d98f2d303295132958ca041cc188b3e924"} Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.599255 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-p8g95"] Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.614243 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-587f9c8867-zp5zg" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.731733 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-bbc28"] Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.813780 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-6b7c6d77c9-8v544"] Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.833744 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.849397 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-6b7c6d77c9-8v544"] Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.850221 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-service-cert\"" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.852850 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-kgrwd\"" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.965474 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/78131e28-13b9-46ee-b506-d7d79f747263-openshift-service-ca\") pod \"perses-operator-6b7c6d77c9-8v544\" (UID: \"78131e28-13b9-46ee-b506-d7d79f747263\") " pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.965547 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78131e28-13b9-46ee-b506-d7d79f747263-webhook-cert\") pod \"perses-operator-6b7c6d77c9-8v544\" (UID: \"78131e28-13b9-46ee-b506-d7d79f747263\") " pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 00:21:44.965680 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78131e28-13b9-46ee-b506-d7d79f747263-apiservice-cert\") pod \"perses-operator-6b7c6d77c9-8v544\" (UID: \"78131e28-13b9-46ee-b506-d7d79f747263\") " pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:21:44 crc kubenswrapper[5106]: I0320 
00:21:44.965695 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg5r2\" (UniqueName: \"kubernetes.io/projected/78131e28-13b9-46ee-b506-d7d79f747263-kube-api-access-hg5r2\") pod \"perses-operator-6b7c6d77c9-8v544\" (UID: \"78131e28-13b9-46ee-b506-d7d79f747263\") " pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.066730 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78131e28-13b9-46ee-b506-d7d79f747263-webhook-cert\") pod \"perses-operator-6b7c6d77c9-8v544\" (UID: \"78131e28-13b9-46ee-b506-d7d79f747263\") " pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.067154 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78131e28-13b9-46ee-b506-d7d79f747263-apiservice-cert\") pod \"perses-operator-6b7c6d77c9-8v544\" (UID: \"78131e28-13b9-46ee-b506-d7d79f747263\") " pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.067181 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hg5r2\" (UniqueName: \"kubernetes.io/projected/78131e28-13b9-46ee-b506-d7d79f747263-kube-api-access-hg5r2\") pod \"perses-operator-6b7c6d77c9-8v544\" (UID: \"78131e28-13b9-46ee-b506-d7d79f747263\") " pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.067239 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/78131e28-13b9-46ee-b506-d7d79f747263-openshift-service-ca\") pod \"perses-operator-6b7c6d77c9-8v544\" (UID: \"78131e28-13b9-46ee-b506-d7d79f747263\") " 
pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.068270 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/78131e28-13b9-46ee-b506-d7d79f747263-openshift-service-ca\") pod \"perses-operator-6b7c6d77c9-8v544\" (UID: \"78131e28-13b9-46ee-b506-d7d79f747263\") " pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.092528 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78131e28-13b9-46ee-b506-d7d79f747263-apiservice-cert\") pod \"perses-operator-6b7c6d77c9-8v544\" (UID: \"78131e28-13b9-46ee-b506-d7d79f747263\") " pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.095290 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78131e28-13b9-46ee-b506-d7d79f747263-webhook-cert\") pod \"perses-operator-6b7c6d77c9-8v544\" (UID: \"78131e28-13b9-46ee-b506-d7d79f747263\") " pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.130444 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg5r2\" (UniqueName: \"kubernetes.io/projected/78131e28-13b9-46ee-b506-d7d79f747263-kube-api-access-hg5r2\") pod \"perses-operator-6b7c6d77c9-8v544\" (UID: \"78131e28-13b9-46ee-b506-d7d79f747263\") " pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.195925 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.199740 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-587f9c8867-zp5zg"] Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.552805 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mz6xv" event={"ID":"f91136e0-e583-43e3-9c30-919d2f33efa6","Type":"ContainerStarted","Data":"72f1f1e8942b544624dad02e238e215bb97bcffced9bcaaa6e936c7129a97bf6"} Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.555034 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-p8g95" event={"ID":"343fff88-557a-4473-b878-7badd8470e8c","Type":"ContainerStarted","Data":"3a3e60115624a6d4c44378b26631c30dcd1e8ca9bdd52af9b7386c5d868584da"} Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.555963 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-bbc28" event={"ID":"2252d2cb-9b32-4137-8368-8b6c9bf4a267","Type":"ContainerStarted","Data":"cb73c91b0f0de8d4391ac311ed5be41f81231a30b9b005d64919d23a3bf4c401"} Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.569812 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mz6xv" podStartSLOduration=9.375415316 podStartE2EDuration="10.569793498s" podCreationTimestamp="2026-03-20 00:21:35 +0000 UTC" firstStartedPulling="2026-03-20 00:21:40.409321062 +0000 UTC m=+754.843055106" lastFinishedPulling="2026-03-20 00:21:41.603699234 +0000 UTC m=+756.037433288" observedRunningTime="2026-03-20 00:21:45.568420826 +0000 UTC m=+760.002154880" watchObservedRunningTime="2026-03-20 00:21:45.569793498 +0000 UTC m=+760.003527552" Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.912925 5106 
kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:21:45 crc kubenswrapper[5106]: I0320 00:21:45.913223 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:21:46 crc kubenswrapper[5106]: W0320 00:21:46.149437 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeca89ef6_a7ff_48b6_a250_3b10b73a40be.slice/crio-acca2cf6a3492989fe499c13bd022e2f281706c5e6a003d18e02da55bdc52189 WatchSource:0}: Error finding container acca2cf6a3492989fe499c13bd022e2f281706c5e6a003d18e02da55bdc52189: Status 404 returned error can't find the container with id acca2cf6a3492989fe499c13bd022e2f281706c5e6a003d18e02da55bdc52189 Mar 20 00:21:46 crc kubenswrapper[5106]: I0320 00:21:46.529929 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-6b7c6d77c9-8v544"] Mar 20 00:21:46 crc kubenswrapper[5106]: W0320 00:21:46.564512 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78131e28_13b9_46ee_b506_d7d79f747263.slice/crio-0899fabaa6f725e3618782770012a50935b347f5b0c835fefef3b58ced99ef44 WatchSource:0}: Error finding container 0899fabaa6f725e3618782770012a50935b347f5b0c835fefef3b58ced99ef44: Status 404 returned error can't find the container with id 0899fabaa6f725e3618782770012a50935b347f5b0c835fefef3b58ced99ef44 Mar 20 00:21:46 crc kubenswrapper[5106]: I0320 00:21:46.583976 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-587f9c8867-zp5zg" event={"ID":"eca89ef6-a7ff-48b6-a250-3b10b73a40be","Type":"ContainerStarted","Data":"acca2cf6a3492989fe499c13bd022e2f281706c5e6a003d18e02da55bdc52189"} Mar 20 00:21:46 crc kubenswrapper[5106]: I0320 00:21:46.993189 5106 prober.go:120] "Probe 
failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mz6xv" podUID="f91136e0-e583-43e3-9c30-919d2f33efa6" containerName="registry-server" probeResult="failure" output=< Mar 20 00:21:46 crc kubenswrapper[5106]: timeout: failed to connect service ":50051" within 1s Mar 20 00:21:46 crc kubenswrapper[5106]: > Mar 20 00:21:47 crc kubenswrapper[5106]: I0320 00:21:47.601054 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" event={"ID":"78131e28-13b9-46ee-b506-d7d79f747263","Type":"ContainerStarted","Data":"0899fabaa6f725e3618782770012a50935b347f5b0c835fefef3b58ced99ef44"} Mar 20 00:21:51 crc kubenswrapper[5106]: I0320 00:21:51.467465 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:21:51 crc kubenswrapper[5106]: I0320 00:21:51.544498 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:21:53 crc kubenswrapper[5106]: I0320 00:21:53.452440 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhkc7"] Mar 20 00:21:53 crc kubenswrapper[5106]: I0320 00:21:53.453104 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rhkc7" podUID="9b2de389-39e2-4ebb-b208-248ff53c060b" containerName="registry-server" containerID="cri-o://b8426451d2e19e2d93ee99b78940ec63e6d2509e6a35b1604cf04db53156b284" gracePeriod=2 Mar 20 00:21:53 crc kubenswrapper[5106]: I0320 00:21:53.687664 5106 generic.go:358] "Generic (PLEG): container finished" podID="9b2de389-39e2-4ebb-b208-248ff53c060b" containerID="b8426451d2e19e2d93ee99b78940ec63e6d2509e6a35b1604cf04db53156b284" exitCode=0 Mar 20 00:21:53 crc kubenswrapper[5106]: I0320 00:21:53.687754 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-rhkc7" event={"ID":"9b2de389-39e2-4ebb-b208-248ff53c060b","Type":"ContainerDied","Data":"b8426451d2e19e2d93ee99b78940ec63e6d2509e6a35b1604cf04db53156b284"} Mar 20 00:21:55 crc kubenswrapper[5106]: I0320 00:21:55.373385 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:21:55 crc kubenswrapper[5106]: I0320 00:21:55.373474 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:21:56 crc kubenswrapper[5106]: I0320 00:21:56.011783 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:21:56 crc kubenswrapper[5106]: I0320 00:21:56.098545 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:21:58 crc kubenswrapper[5106]: I0320 00:21:58.619280 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-t9qmn"] Mar 20 00:21:58 crc kubenswrapper[5106]: I0320 00:21:58.717082 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-t9qmn"] Mar 20 00:21:58 crc kubenswrapper[5106]: I0320 00:21:58.717232 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-t9qmn" Mar 20 00:21:58 crc kubenswrapper[5106]: I0320 00:21:58.719554 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Mar 20 00:21:58 crc kubenswrapper[5106]: I0320 00:21:58.719555 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-5zv6b\"" Mar 20 00:21:58 crc kubenswrapper[5106]: I0320 00:21:58.719656 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Mar 20 00:21:58 crc kubenswrapper[5106]: I0320 00:21:58.859326 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4mzn\" (UniqueName: \"kubernetes.io/projected/0c41ac2a-599d-4117-b60d-48ca991ac762-kube-api-access-p4mzn\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-t9qmn\" (UID: \"0c41ac2a-599d-4117-b60d-48ca991ac762\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-t9qmn" Mar 20 00:21:58 crc kubenswrapper[5106]: I0320 00:21:58.859378 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c41ac2a-599d-4117-b60d-48ca991ac762-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-t9qmn\" (UID: \"0c41ac2a-599d-4117-b60d-48ca991ac762\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-t9qmn" Mar 20 00:21:58 crc kubenswrapper[5106]: I0320 00:21:58.960860 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p4mzn\" (UniqueName: \"kubernetes.io/projected/0c41ac2a-599d-4117-b60d-48ca991ac762-kube-api-access-p4mzn\") pod 
\"cert-manager-operator-controller-manager-7c5b8bd68-t9qmn\" (UID: \"0c41ac2a-599d-4117-b60d-48ca991ac762\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-t9qmn" Mar 20 00:21:58 crc kubenswrapper[5106]: I0320 00:21:58.960923 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c41ac2a-599d-4117-b60d-48ca991ac762-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-t9qmn\" (UID: \"0c41ac2a-599d-4117-b60d-48ca991ac762\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-t9qmn" Mar 20 00:21:58 crc kubenswrapper[5106]: I0320 00:21:58.961634 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0c41ac2a-599d-4117-b60d-48ca991ac762-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-t9qmn\" (UID: \"0c41ac2a-599d-4117-b60d-48ca991ac762\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-t9qmn" Mar 20 00:21:58 crc kubenswrapper[5106]: I0320 00:21:58.993553 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4mzn\" (UniqueName: \"kubernetes.io/projected/0c41ac2a-599d-4117-b60d-48ca991ac762-kube-api-access-p4mzn\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-t9qmn\" (UID: \"0c41ac2a-599d-4117-b60d-48ca991ac762\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-t9qmn" Mar 20 00:21:59 crc kubenswrapper[5106]: I0320 00:21:59.038718 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-t9qmn" Mar 20 00:22:00 crc kubenswrapper[5106]: I0320 00:22:00.132905 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566102-tjhlf"] Mar 20 00:22:00 crc kubenswrapper[5106]: I0320 00:22:00.138179 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566102-tjhlf" Mar 20 00:22:00 crc kubenswrapper[5106]: I0320 00:22:00.141253 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566102-tjhlf"] Mar 20 00:22:00 crc kubenswrapper[5106]: I0320 00:22:00.141994 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Mar 20 00:22:00 crc kubenswrapper[5106]: I0320 00:22:00.142102 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5fjw8\"" Mar 20 00:22:00 crc kubenswrapper[5106]: I0320 00:22:00.142261 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Mar 20 00:22:00 crc kubenswrapper[5106]: I0320 00:22:00.188711 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzgvq\" (UniqueName: \"kubernetes.io/projected/d134799c-135a-45ed-910c-a8a191d5232d-kube-api-access-rzgvq\") pod \"auto-csr-approver-29566102-tjhlf\" (UID: \"d134799c-135a-45ed-910c-a8a191d5232d\") " pod="openshift-infra/auto-csr-approver-29566102-tjhlf" Mar 20 00:22:00 crc kubenswrapper[5106]: I0320 00:22:00.291286 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rzgvq\" (UniqueName: \"kubernetes.io/projected/d134799c-135a-45ed-910c-a8a191d5232d-kube-api-access-rzgvq\") pod \"auto-csr-approver-29566102-tjhlf\" (UID: 
\"d134799c-135a-45ed-910c-a8a191d5232d\") " pod="openshift-infra/auto-csr-approver-29566102-tjhlf" Mar 20 00:22:00 crc kubenswrapper[5106]: I0320 00:22:00.317490 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzgvq\" (UniqueName: \"kubernetes.io/projected/d134799c-135a-45ed-910c-a8a191d5232d-kube-api-access-rzgvq\") pod \"auto-csr-approver-29566102-tjhlf\" (UID: \"d134799c-135a-45ed-910c-a8a191d5232d\") " pod="openshift-infra/auto-csr-approver-29566102-tjhlf" Mar 20 00:22:00 crc kubenswrapper[5106]: I0320 00:22:00.456710 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566102-tjhlf" Mar 20 00:22:00 crc kubenswrapper[5106]: I0320 00:22:00.860052 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mz6xv"] Mar 20 00:22:00 crc kubenswrapper[5106]: I0320 00:22:00.860722 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mz6xv" podUID="f91136e0-e583-43e3-9c30-919d2f33efa6" containerName="registry-server" containerID="cri-o://72f1f1e8942b544624dad02e238e215bb97bcffced9bcaaa6e936c7129a97bf6" gracePeriod=2 Mar 20 00:22:01 crc kubenswrapper[5106]: E0320 00:22:01.469930 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8426451d2e19e2d93ee99b78940ec63e6d2509e6a35b1604cf04db53156b284 is running failed: container process not found" containerID="b8426451d2e19e2d93ee99b78940ec63e6d2509e6a35b1604cf04db53156b284" cmd=["grpc_health_probe","-addr=:50051"] Mar 20 00:22:01 crc kubenswrapper[5106]: E0320 00:22:01.470357 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8426451d2e19e2d93ee99b78940ec63e6d2509e6a35b1604cf04db53156b284 is running failed: 
container process not found" containerID="b8426451d2e19e2d93ee99b78940ec63e6d2509e6a35b1604cf04db53156b284" cmd=["grpc_health_probe","-addr=:50051"] Mar 20 00:22:01 crc kubenswrapper[5106]: E0320 00:22:01.470594 5106 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8426451d2e19e2d93ee99b78940ec63e6d2509e6a35b1604cf04db53156b284 is running failed: container process not found" containerID="b8426451d2e19e2d93ee99b78940ec63e6d2509e6a35b1604cf04db53156b284" cmd=["grpc_health_probe","-addr=:50051"] Mar 20 00:22:01 crc kubenswrapper[5106]: E0320 00:22:01.470628 5106 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8426451d2e19e2d93ee99b78940ec63e6d2509e6a35b1604cf04db53156b284 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-rhkc7" podUID="9b2de389-39e2-4ebb-b208-248ff53c060b" containerName="registry-server" probeResult="unknown" Mar 20 00:22:01 crc kubenswrapper[5106]: I0320 00:22:01.773332 5106 generic.go:358] "Generic (PLEG): container finished" podID="f91136e0-e583-43e3-9c30-919d2f33efa6" containerID="72f1f1e8942b544624dad02e238e215bb97bcffced9bcaaa6e936c7129a97bf6" exitCode=0 Mar 20 00:22:01 crc kubenswrapper[5106]: I0320 00:22:01.773397 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mz6xv" event={"ID":"f91136e0-e583-43e3-9c30-919d2f33efa6","Type":"ContainerDied","Data":"72f1f1e8942b544624dad02e238e215bb97bcffced9bcaaa6e936c7129a97bf6"} Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.520437 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.630907 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2de389-39e2-4ebb-b208-248ff53c060b-catalog-content\") pod \"9b2de389-39e2-4ebb-b208-248ff53c060b\" (UID: \"9b2de389-39e2-4ebb-b208-248ff53c060b\") " Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.631131 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms4vt\" (UniqueName: \"kubernetes.io/projected/9b2de389-39e2-4ebb-b208-248ff53c060b-kube-api-access-ms4vt\") pod \"9b2de389-39e2-4ebb-b208-248ff53c060b\" (UID: \"9b2de389-39e2-4ebb-b208-248ff53c060b\") " Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.631149 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2de389-39e2-4ebb-b208-248ff53c060b-utilities\") pod \"9b2de389-39e2-4ebb-b208-248ff53c060b\" (UID: \"9b2de389-39e2-4ebb-b208-248ff53c060b\") " Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.632357 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b2de389-39e2-4ebb-b208-248ff53c060b-utilities" (OuterVolumeSpecName: "utilities") pod "9b2de389-39e2-4ebb-b208-248ff53c060b" (UID: "9b2de389-39e2-4ebb-b208-248ff53c060b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.644812 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b2de389-39e2-4ebb-b208-248ff53c060b-kube-api-access-ms4vt" (OuterVolumeSpecName: "kube-api-access-ms4vt") pod "9b2de389-39e2-4ebb-b208-248ff53c060b" (UID: "9b2de389-39e2-4ebb-b208-248ff53c060b"). InnerVolumeSpecName "kube-api-access-ms4vt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.711275 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b2de389-39e2-4ebb-b208-248ff53c060b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b2de389-39e2-4ebb-b208-248ff53c060b" (UID: "9b2de389-39e2-4ebb-b208-248ff53c060b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.734359 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ms4vt\" (UniqueName: \"kubernetes.io/projected/9b2de389-39e2-4ebb-b208-248ff53c060b-kube-api-access-ms4vt\") on node \"crc\" DevicePath \"\"" Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.734401 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b2de389-39e2-4ebb-b208-248ff53c060b-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.734411 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b2de389-39e2-4ebb-b208-248ff53c060b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.800304 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhkc7" event={"ID":"9b2de389-39e2-4ebb-b208-248ff53c060b","Type":"ContainerDied","Data":"b2fe200062dfee25b36bf07eea57055d9e3942ea3f49e7cd04619274dce73a95"} Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.800600 5106 scope.go:117] "RemoveContainer" containerID="b8426451d2e19e2d93ee99b78940ec63e6d2509e6a35b1604cf04db53156b284" Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.800687 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhkc7" Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.857871 5106 scope.go:117] "RemoveContainer" containerID="46c81d624731c8707be77e08920139a5eca61083ebf2722bf7cc7a2446c849cf" Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.890480 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566102-tjhlf"] Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.914032 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-t9qmn"] Mar 20 00:22:02 crc kubenswrapper[5106]: W0320 00:22:02.921946 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd134799c_135a_45ed_910c_a8a191d5232d.slice/crio-b17971a94b5020841da3315e5bae42b7328f5a3b29462679b4fecb1ef78dc52d WatchSource:0}: Error finding container b17971a94b5020841da3315e5bae42b7328f5a3b29462679b4fecb1ef78dc52d: Status 404 returned error can't find the container with id b17971a94b5020841da3315e5bae42b7328f5a3b29462679b4fecb1ef78dc52d Mar 20 00:22:02 crc kubenswrapper[5106]: W0320 00:22:02.937853 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c41ac2a_599d_4117_b60d_48ca991ac762.slice/crio-683011d39b3da755ad5fb5c8d685b5f6c6c33767eb1bd7f9ebc82ff30ee5dd83 WatchSource:0}: Error finding container 683011d39b3da755ad5fb5c8d685b5f6c6c33767eb1bd7f9ebc82ff30ee5dd83: Status 404 returned error can't find the container with id 683011d39b3da755ad5fb5c8d685b5f6c6c33767eb1bd7f9ebc82ff30ee5dd83 Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.955990 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:22:02 crc kubenswrapper[5106]: I0320 00:22:02.983275 5106 scope.go:117] "RemoveContainer" containerID="928f5d9da567258fb7f444c6c5ae4305e98a2a1078d6b72faa954437f00bce41" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.012054 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhkc7"] Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.017227 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rhkc7"] Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.044034 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf4wp\" (UniqueName: \"kubernetes.io/projected/f91136e0-e583-43e3-9c30-919d2f33efa6-kube-api-access-nf4wp\") pod \"f91136e0-e583-43e3-9c30-919d2f33efa6\" (UID: \"f91136e0-e583-43e3-9c30-919d2f33efa6\") " Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.044108 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f91136e0-e583-43e3-9c30-919d2f33efa6-catalog-content\") pod \"f91136e0-e583-43e3-9c30-919d2f33efa6\" (UID: \"f91136e0-e583-43e3-9c30-919d2f33efa6\") " Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.044225 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f91136e0-e583-43e3-9c30-919d2f33efa6-utilities\") pod \"f91136e0-e583-43e3-9c30-919d2f33efa6\" (UID: \"f91136e0-e583-43e3-9c30-919d2f33efa6\") " Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.046924 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f91136e0-e583-43e3-9c30-919d2f33efa6-utilities" (OuterVolumeSpecName: "utilities") pod "f91136e0-e583-43e3-9c30-919d2f33efa6" (UID: 
"f91136e0-e583-43e3-9c30-919d2f33efa6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.081838 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91136e0-e583-43e3-9c30-919d2f33efa6-kube-api-access-nf4wp" (OuterVolumeSpecName: "kube-api-access-nf4wp") pod "f91136e0-e583-43e3-9c30-919d2f33efa6" (UID: "f91136e0-e583-43e3-9c30-919d2f33efa6"). InnerVolumeSpecName "kube-api-access-nf4wp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.151356 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f91136e0-e583-43e3-9c30-919d2f33efa6-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.151382 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nf4wp\" (UniqueName: \"kubernetes.io/projected/f91136e0-e583-43e3-9c30-919d2f33efa6-kube-api-access-nf4wp\") on node \"crc\" DevicePath \"\"" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.201345 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b2de389-39e2-4ebb-b208-248ff53c060b" path="/var/lib/kubelet/pods/9b2de389-39e2-4ebb-b208-248ff53c060b/volumes" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.222305 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f91136e0-e583-43e3-9c30-919d2f33efa6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f91136e0-e583-43e3-9c30-919d2f33efa6" (UID: "f91136e0-e583-43e3-9c30-919d2f33efa6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.252540 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f91136e0-e583-43e3-9c30-919d2f33efa6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.807997 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-p8g95" event={"ID":"343fff88-557a-4473-b878-7badd8470e8c","Type":"ContainerStarted","Data":"04bb1e13d581590acf819bdc862d4b3e0d8b9c10791c08113047318ab7a4852b"} Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.809755 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-t9qmn" event={"ID":"0c41ac2a-599d-4117-b60d-48ca991ac762","Type":"ContainerStarted","Data":"683011d39b3da755ad5fb5c8d685b5f6c6c33767eb1bd7f9ebc82ff30ee5dd83"} Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.811622 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-f76db68c9-j9h6m" event={"ID":"a9535b92-2a0c-45e6-939e-878d22fec64e","Type":"ContainerStarted","Data":"dad8168ed3532c636dc293eee03be8632917520e055bcbf8ac011d7bddfe2958"} Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.814704 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-bbc28" event={"ID":"2252d2cb-9b32-4137-8368-8b6c9bf4a267","Type":"ContainerStarted","Data":"17f9289dd1cf0ab94582c1554853980cadbe45eeba16ac15e1590ec38abb9f70"} Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.815894 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-55568fc96c-xxrkx" 
event={"ID":"a5a00efc-0f62-4415-97e0-e0bfd2f1276a","Type":"ContainerStarted","Data":"83f0d7bcfbaa7e8945eed6c68e6b75d0363b553042e89e75481030afd2288dd0"} Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.818907 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mz6xv" event={"ID":"f91136e0-e583-43e3-9c30-919d2f33efa6","Type":"ContainerDied","Data":"86f73dbaa2c6d76aa1412679ca65222cb1e2d51b1bca96091d797b0098a67446"} Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.818942 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mz6xv" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.818949 5106 scope.go:117] "RemoveContainer" containerID="72f1f1e8942b544624dad02e238e215bb97bcffced9bcaaa6e936c7129a97bf6" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.824852 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" event={"ID":"78131e28-13b9-46ee-b506-d7d79f747263","Type":"ContainerStarted","Data":"c47621ddfef1077e34a4d5a93f130750c9fee8c2fe5401a455d58c01daf1cd58"} Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.824978 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.829373 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-587f9c8867-zp5zg" event={"ID":"eca89ef6-a7ff-48b6-a250-3b10b73a40be","Type":"ContainerStarted","Data":"05f8dd88263f5dcefb7cf79b47d87faef69a84edafff95e90cd502e289a73d33"} Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.829865 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-587f9c8867-zp5zg" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.833955 5106 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-p8g95" podStartSLOduration=2.901374699 podStartE2EDuration="20.833934511s" podCreationTimestamp="2026-03-20 00:21:43 +0000 UTC" firstStartedPulling="2026-03-20 00:21:44.638003166 +0000 UTC m=+759.071737220" lastFinishedPulling="2026-03-20 00:22:02.570562978 +0000 UTC m=+777.004297032" observedRunningTime="2026-03-20 00:22:03.827369824 +0000 UTC m=+778.261103878" watchObservedRunningTime="2026-03-20 00:22:03.833934511 +0000 UTC m=+778.267668565" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.835041 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566102-tjhlf" event={"ID":"d134799c-135a-45ed-910c-a8a191d5232d","Type":"ContainerStarted","Data":"b17971a94b5020841da3315e5bae42b7328f5a3b29462679b4fecb1ef78dc52d"} Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.846591 5106 scope.go:117] "RemoveContainer" containerID="1726ab1b73c7e79577f5f2b9dd7c93d98f2d303295132958ca041cc188b3e924" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.848198 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-659dbf9598-bbc28" podStartSLOduration=3.043380764 podStartE2EDuration="20.848174374s" podCreationTimestamp="2026-03-20 00:21:43 +0000 UTC" firstStartedPulling="2026-03-20 00:21:44.767907989 +0000 UTC m=+759.201642043" lastFinishedPulling="2026-03-20 00:22:02.572701599 +0000 UTC m=+777.006435653" observedRunningTime="2026-03-20 00:22:03.847412976 +0000 UTC m=+778.281147030" watchObservedRunningTime="2026-03-20 00:22:03.848174374 +0000 UTC m=+778.281908428" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.878341 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" podStartSLOduration=3.888051607 
podStartE2EDuration="19.87832842s" podCreationTimestamp="2026-03-20 00:21:44 +0000 UTC" firstStartedPulling="2026-03-20 00:21:46.580133691 +0000 UTC m=+761.013867735" lastFinishedPulling="2026-03-20 00:22:02.570410494 +0000 UTC m=+777.004144548" observedRunningTime="2026-03-20 00:22:03.877953551 +0000 UTC m=+778.311687605" watchObservedRunningTime="2026-03-20 00:22:03.87832842 +0000 UTC m=+778.312062474" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.888784 5106 scope.go:117] "RemoveContainer" containerID="4b144b60f4d952e9a8e4fa53cff5590ca5721682122419df4d1effa84c814544" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.891970 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-587f9c8867-zp5zg" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.912304 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-f76db68c9-j9h6m" podStartSLOduration=2.8931147189999997 podStartE2EDuration="22.912287728s" podCreationTimestamp="2026-03-20 00:21:41 +0000 UTC" firstStartedPulling="2026-03-20 00:21:42.445359306 +0000 UTC m=+756.879093360" lastFinishedPulling="2026-03-20 00:22:02.464532315 +0000 UTC m=+776.898266369" observedRunningTime="2026-03-20 00:22:03.908397314 +0000 UTC m=+778.342131358" watchObservedRunningTime="2026-03-20 00:22:03.912287728 +0000 UTC m=+778.346021782" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.955225 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-55568fc96c-xxrkx" podStartSLOduration=2.484005392 podStartE2EDuration="20.955198911s" podCreationTimestamp="2026-03-20 00:21:43 +0000 UTC" firstStartedPulling="2026-03-20 00:21:44.103619131 +0000 UTC m=+758.537353185" lastFinishedPulling="2026-03-20 00:22:02.57481265 +0000 UTC m=+777.008546704" observedRunningTime="2026-03-20 00:22:03.944899433 +0000 UTC m=+778.378633507" 
watchObservedRunningTime="2026-03-20 00:22:03.955198911 +0000 UTC m=+778.388932965" Mar 20 00:22:03 crc kubenswrapper[5106]: I0320 00:22:03.975473 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-587f9c8867-zp5zg" podStartSLOduration=3.553037471 podStartE2EDuration="19.975451018s" podCreationTimestamp="2026-03-20 00:21:44 +0000 UTC" firstStartedPulling="2026-03-20 00:21:46.152870544 +0000 UTC m=+760.586604598" lastFinishedPulling="2026-03-20 00:22:02.575284091 +0000 UTC m=+777.009018145" observedRunningTime="2026-03-20 00:22:03.968633364 +0000 UTC m=+778.402367438" watchObservedRunningTime="2026-03-20 00:22:03.975451018 +0000 UTC m=+778.409185072" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.006847 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mz6xv"] Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.023573 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mz6xv"] Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.264866 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.265509 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f91136e0-e583-43e3-9c30-919d2f33efa6" containerName="extract-content" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.265522 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91136e0-e583-43e3-9c30-919d2f33efa6" containerName="extract-content" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.265535 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9b2de389-39e2-4ebb-b208-248ff53c060b" containerName="extract-content" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.265540 5106 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9b2de389-39e2-4ebb-b208-248ff53c060b" containerName="extract-content" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.265546 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f91136e0-e583-43e3-9c30-919d2f33efa6" containerName="registry-server" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.265552 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91136e0-e583-43e3-9c30-919d2f33efa6" containerName="registry-server" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.265570 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9b2de389-39e2-4ebb-b208-248ff53c060b" containerName="registry-server" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.265591 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2de389-39e2-4ebb-b208-248ff53c060b" containerName="registry-server" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.265605 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9b2de389-39e2-4ebb-b208-248ff53c060b" containerName="extract-utilities" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.265610 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2de389-39e2-4ebb-b208-248ff53c060b" containerName="extract-utilities" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.265620 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f91136e0-e583-43e3-9c30-919d2f33efa6" containerName="extract-utilities" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.265626 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91136e0-e583-43e3-9c30-919d2f33efa6" containerName="extract-utilities" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.265758 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="f91136e0-e583-43e3-9c30-919d2f33efa6" containerName="registry-server" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.265775 
5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="9b2de389-39e2-4ebb-b208-248ff53c060b" containerName="registry-server" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.269415 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.282338 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.282965 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.283337 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.283601 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.283803 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.283891 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.283808 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.284390 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 
00:22:04.285102 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-lfsv4\"" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.366654 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.366723 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.366756 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.366787 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.366810 5106 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.366854 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.366894 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.366922 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.366949 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: 
\"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.366994 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.367025 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.367073 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.367109 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " 
pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.367138 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.367164 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.372797 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.468565 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.468638 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 
00:22:04.470302 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.470379 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.470422 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.470444 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.470474 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: 
\"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.470502 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.470522 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.470544 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.470566 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.470669 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: 
\"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.470720 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.470783 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.470813 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.470867 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 
00:22:04.471125 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.471396 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.472395 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.472637 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.473020 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " 
pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.473339 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.474351 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.476772 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.477109 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.477839 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: 
\"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.481303 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.481796 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.483482 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.484168 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/6a45ec79-6631-4cc3-a937-0b5e42ec3c8c-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c\") " pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.587966 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.848029 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566102-tjhlf" event={"ID":"d134799c-135a-45ed-910c-a8a191d5232d","Type":"ContainerStarted","Data":"df1bbd8ee42e00d6fccd62e0d5e65b872d26a56101b94968ec35a3fcb8b0a0ce"} Mar 20 00:22:04 crc kubenswrapper[5106]: I0320 00:22:04.863447 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566102-tjhlf" podStartSLOduration=3.528043997 podStartE2EDuration="4.863431204s" podCreationTimestamp="2026-03-20 00:22:00 +0000 UTC" firstStartedPulling="2026-03-20 00:22:02.923022803 +0000 UTC m=+777.356756857" lastFinishedPulling="2026-03-20 00:22:04.25841001 +0000 UTC m=+778.692144064" observedRunningTime="2026-03-20 00:22:04.861901727 +0000 UTC m=+779.295635781" watchObservedRunningTime="2026-03-20 00:22:04.863431204 +0000 UTC m=+779.297165258" Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.173717 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91136e0-e583-43e3-9c30-919d2f33efa6" path="/var/lib/kubelet/pods/f91136e0-e583-43e3-9c30-919d2f33efa6/volumes" Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.206057 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.340245 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-jhgps" podUID="00c02264-3068-4287-a30a-13b0003bf5e1" containerName="registry" containerID="cri-o://3b7390025d59e196f3ea42120ba85ed882256a7dc1b39b71ab927e35a2839193" gracePeriod=30 Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.851092 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.928421 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c","Type":"ContainerStarted","Data":"29c14b798ce986e3240354d119a76328ffbfaa9c7a08c5ed5ba155a32c6034b9"} Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.940804 5106 generic.go:358] "Generic (PLEG): container finished" podID="00c02264-3068-4287-a30a-13b0003bf5e1" containerID="3b7390025d59e196f3ea42120ba85ed882256a7dc1b39b71ab927e35a2839193" exitCode=0 Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.940952 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jhgps" event={"ID":"00c02264-3068-4287-a30a-13b0003bf5e1","Type":"ContainerDied","Data":"3b7390025d59e196f3ea42120ba85ed882256a7dc1b39b71ab927e35a2839193"} Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.940983 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-jhgps" event={"ID":"00c02264-3068-4287-a30a-13b0003bf5e1","Type":"ContainerDied","Data":"85522d9bbf7329891ad6f933eca0439ee543787d92231736b39ac6b3f5bd1a46"} Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.940999 5106 scope.go:117] "RemoveContainer" containerID="3b7390025d59e196f3ea42120ba85ed882256a7dc1b39b71ab927e35a2839193" Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.941190 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-jhgps" Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.943672 5106 generic.go:358] "Generic (PLEG): container finished" podID="d134799c-135a-45ed-910c-a8a191d5232d" containerID="df1bbd8ee42e00d6fccd62e0d5e65b872d26a56101b94968ec35a3fcb8b0a0ce" exitCode=0 Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.943998 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566102-tjhlf" event={"ID":"d134799c-135a-45ed-910c-a8a191d5232d","Type":"ContainerDied","Data":"df1bbd8ee42e00d6fccd62e0d5e65b872d26a56101b94968ec35a3fcb8b0a0ce"} Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.965047 5106 scope.go:117] "RemoveContainer" containerID="3b7390025d59e196f3ea42120ba85ed882256a7dc1b39b71ab927e35a2839193" Mar 20 00:22:05 crc kubenswrapper[5106]: E0320 00:22:05.967042 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7390025d59e196f3ea42120ba85ed882256a7dc1b39b71ab927e35a2839193\": container with ID starting with 3b7390025d59e196f3ea42120ba85ed882256a7dc1b39b71ab927e35a2839193 not found: ID does not exist" containerID="3b7390025d59e196f3ea42120ba85ed882256a7dc1b39b71ab927e35a2839193" Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.967090 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b7390025d59e196f3ea42120ba85ed882256a7dc1b39b71ab927e35a2839193"} err="failed to get container status \"3b7390025d59e196f3ea42120ba85ed882256a7dc1b39b71ab927e35a2839193\": rpc error: code = NotFound desc = could not find container \"3b7390025d59e196f3ea42120ba85ed882256a7dc1b39b71ab927e35a2839193\": container with ID starting with 3b7390025d59e196f3ea42120ba85ed882256a7dc1b39b71ab927e35a2839193 not found: ID does not exist" Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.997414 5106 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/00c02264-3068-4287-a30a-13b0003bf5e1-registry-certificates\") pod \"00c02264-3068-4287-a30a-13b0003bf5e1\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.997477 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-registry-tls\") pod \"00c02264-3068-4287-a30a-13b0003bf5e1\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.997497 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-bound-sa-token\") pod \"00c02264-3068-4287-a30a-13b0003bf5e1\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.997518 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/00c02264-3068-4287-a30a-13b0003bf5e1-installation-pull-secrets\") pod \"00c02264-3068-4287-a30a-13b0003bf5e1\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.997540 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/00c02264-3068-4287-a30a-13b0003bf5e1-ca-trust-extracted\") pod \"00c02264-3068-4287-a30a-13b0003bf5e1\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.997616 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bx85\" (UniqueName: \"kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-kube-api-access-5bx85\") pod 
\"00c02264-3068-4287-a30a-13b0003bf5e1\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.998946 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c02264-3068-4287-a30a-13b0003bf5e1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "00c02264-3068-4287-a30a-13b0003bf5e1" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.999183 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"00c02264-3068-4287-a30a-13b0003bf5e1\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.999235 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00c02264-3068-4287-a30a-13b0003bf5e1-trusted-ca\") pod \"00c02264-3068-4287-a30a-13b0003bf5e1\" (UID: \"00c02264-3068-4287-a30a-13b0003bf5e1\") " Mar 20 00:22:05 crc kubenswrapper[5106]: I0320 00:22:05.999630 5106 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/00c02264-3068-4287-a30a-13b0003bf5e1-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 20 00:22:06 crc kubenswrapper[5106]: I0320 00:22:06.003886 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00c02264-3068-4287-a30a-13b0003bf5e1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "00c02264-3068-4287-a30a-13b0003bf5e1" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:22:06 crc kubenswrapper[5106]: I0320 00:22:06.004696 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "00c02264-3068-4287-a30a-13b0003bf5e1" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:22:06 crc kubenswrapper[5106]: I0320 00:22:06.006266 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00c02264-3068-4287-a30a-13b0003bf5e1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "00c02264-3068-4287-a30a-13b0003bf5e1" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:22:06 crc kubenswrapper[5106]: I0320 00:22:06.006311 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "00c02264-3068-4287-a30a-13b0003bf5e1" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:22:06 crc kubenswrapper[5106]: I0320 00:22:06.008283 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-kube-api-access-5bx85" (OuterVolumeSpecName: "kube-api-access-5bx85") pod "00c02264-3068-4287-a30a-13b0003bf5e1" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1"). InnerVolumeSpecName "kube-api-access-5bx85". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:22:06 crc kubenswrapper[5106]: I0320 00:22:06.021280 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "00c02264-3068-4287-a30a-13b0003bf5e1" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Mar 20 00:22:06 crc kubenswrapper[5106]: I0320 00:22:06.025641 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00c02264-3068-4287-a30a-13b0003bf5e1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "00c02264-3068-4287-a30a-13b0003bf5e1" (UID: "00c02264-3068-4287-a30a-13b0003bf5e1"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:22:06 crc kubenswrapper[5106]: I0320 00:22:06.102703 5106 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 20 00:22:06 crc kubenswrapper[5106]: I0320 00:22:06.102745 5106 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 20 00:22:06 crc kubenswrapper[5106]: I0320 00:22:06.102759 5106 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/00c02264-3068-4287-a30a-13b0003bf5e1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 20 00:22:06 crc kubenswrapper[5106]: I0320 00:22:06.102774 5106 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/00c02264-3068-4287-a30a-13b0003bf5e1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 20 00:22:06 crc kubenswrapper[5106]: I0320 00:22:06.102785 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5bx85\" (UniqueName: \"kubernetes.io/projected/00c02264-3068-4287-a30a-13b0003bf5e1-kube-api-access-5bx85\") on node \"crc\" DevicePath \"\"" Mar 20 00:22:06 crc kubenswrapper[5106]: I0320 00:22:06.102792 5106 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/00c02264-3068-4287-a30a-13b0003bf5e1-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 00:22:06 crc kubenswrapper[5106]: I0320 00:22:06.291954 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jhgps"] Mar 20 00:22:06 crc kubenswrapper[5106]: I0320 00:22:06.298228 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-jhgps"] Mar 20 00:22:07 crc kubenswrapper[5106]: I0320 00:22:07.181327 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00c02264-3068-4287-a30a-13b0003bf5e1" path="/var/lib/kubelet/pods/00c02264-3068-4287-a30a-13b0003bf5e1/volumes" Mar 20 00:22:07 crc kubenswrapper[5106]: I0320 00:22:07.344882 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566102-tjhlf" Mar 20 00:22:07 crc kubenswrapper[5106]: I0320 00:22:07.424005 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzgvq\" (UniqueName: \"kubernetes.io/projected/d134799c-135a-45ed-910c-a8a191d5232d-kube-api-access-rzgvq\") pod \"d134799c-135a-45ed-910c-a8a191d5232d\" (UID: \"d134799c-135a-45ed-910c-a8a191d5232d\") " Mar 20 00:22:07 crc kubenswrapper[5106]: I0320 00:22:07.431860 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d134799c-135a-45ed-910c-a8a191d5232d-kube-api-access-rzgvq" (OuterVolumeSpecName: "kube-api-access-rzgvq") pod "d134799c-135a-45ed-910c-a8a191d5232d" (UID: "d134799c-135a-45ed-910c-a8a191d5232d"). InnerVolumeSpecName "kube-api-access-rzgvq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:22:07 crc kubenswrapper[5106]: I0320 00:22:07.525567 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzgvq\" (UniqueName: \"kubernetes.io/projected/d134799c-135a-45ed-910c-a8a191d5232d-kube-api-access-rzgvq\") on node \"crc\" DevicePath \"\"" Mar 20 00:22:07 crc kubenswrapper[5106]: I0320 00:22:07.920980 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566096-hb8rr"] Mar 20 00:22:07 crc kubenswrapper[5106]: I0320 00:22:07.930216 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566096-hb8rr"] Mar 20 00:22:07 crc kubenswrapper[5106]: I0320 00:22:07.972009 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566102-tjhlf" event={"ID":"d134799c-135a-45ed-910c-a8a191d5232d","Type":"ContainerDied","Data":"b17971a94b5020841da3315e5bae42b7328f5a3b29462679b4fecb1ef78dc52d"} Mar 20 00:22:07 crc kubenswrapper[5106]: I0320 00:22:07.972053 5106 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="b17971a94b5020841da3315e5bae42b7328f5a3b29462679b4fecb1ef78dc52d" Mar 20 00:22:07 crc kubenswrapper[5106]: I0320 00:22:07.972118 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566102-tjhlf" Mar 20 00:22:09 crc kubenswrapper[5106]: I0320 00:22:09.172046 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b27eda-de8d-498f-b0e2-67e1c6aafd18" path="/var/lib/kubelet/pods/a2b27eda-de8d-498f-b0e2-67e1c6aafd18/volumes" Mar 20 00:22:13 crc kubenswrapper[5106]: I0320 00:22:13.033980 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-t9qmn" event={"ID":"0c41ac2a-599d-4117-b60d-48ca991ac762","Type":"ContainerStarted","Data":"fbd9ed2301176d79cda23807f2402191eaf274f6d646c48a4d45ed6800426055"} Mar 20 00:22:13 crc kubenswrapper[5106]: I0320 00:22:13.054365 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-t9qmn" podStartSLOduration=5.383797385 podStartE2EDuration="15.054347677s" podCreationTimestamp="2026-03-20 00:21:58 +0000 UTC" firstStartedPulling="2026-03-20 00:22:02.942348408 +0000 UTC m=+777.376082462" lastFinishedPulling="2026-03-20 00:22:12.6128987 +0000 UTC m=+787.046632754" observedRunningTime="2026-03-20 00:22:13.05405208 +0000 UTC m=+787.487786154" watchObservedRunningTime="2026-03-20 00:22:13.054347677 +0000 UTC m=+787.488081721" Mar 20 00:22:14 crc kubenswrapper[5106]: I0320 00:22:14.854338 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-6b7c6d77c9-8v544" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.653593 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-vc4jz"] Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.654427 5106 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d134799c-135a-45ed-910c-a8a191d5232d" containerName="oc" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.654440 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="d134799c-135a-45ed-910c-a8a191d5232d" containerName="oc" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.654479 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="00c02264-3068-4287-a30a-13b0003bf5e1" containerName="registry" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.654489 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="00c02264-3068-4287-a30a-13b0003bf5e1" containerName="registry" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.654625 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="00c02264-3068-4287-a30a-13b0003bf5e1" containerName="registry" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.654639 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="d134799c-135a-45ed-910c-a8a191d5232d" containerName="oc" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.658755 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-vc4jz" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.665688 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-vc4jz"] Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.672612 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-fdnnc\"" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.672751 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.672799 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.777046 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3971612b-b9d3-4678-859a-01070cad10d1-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-vc4jz\" (UID: \"3971612b-b9d3-4678-859a-01070cad10d1\") " pod="cert-manager/cert-manager-webhook-597b96b99b-vc4jz" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.777150 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg4rf\" (UniqueName: \"kubernetes.io/projected/3971612b-b9d3-4678-859a-01070cad10d1-kube-api-access-wg4rf\") pod \"cert-manager-webhook-597b96b99b-vc4jz\" (UID: \"3971612b-b9d3-4678-859a-01070cad10d1\") " pod="cert-manager/cert-manager-webhook-597b96b99b-vc4jz" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.878672 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3971612b-b9d3-4678-859a-01070cad10d1-bound-sa-token\") pod 
\"cert-manager-webhook-597b96b99b-vc4jz\" (UID: \"3971612b-b9d3-4678-859a-01070cad10d1\") " pod="cert-manager/cert-manager-webhook-597b96b99b-vc4jz" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.878709 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wg4rf\" (UniqueName: \"kubernetes.io/projected/3971612b-b9d3-4678-859a-01070cad10d1-kube-api-access-wg4rf\") pod \"cert-manager-webhook-597b96b99b-vc4jz\" (UID: \"3971612b-b9d3-4678-859a-01070cad10d1\") " pod="cert-manager/cert-manager-webhook-597b96b99b-vc4jz" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.898810 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3971612b-b9d3-4678-859a-01070cad10d1-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-vc4jz\" (UID: \"3971612b-b9d3-4678-859a-01070cad10d1\") " pod="cert-manager/cert-manager-webhook-597b96b99b-vc4jz" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.901713 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg4rf\" (UniqueName: \"kubernetes.io/projected/3971612b-b9d3-4678-859a-01070cad10d1-kube-api-access-wg4rf\") pod \"cert-manager-webhook-597b96b99b-vc4jz\" (UID: \"3971612b-b9d3-4678-859a-01070cad10d1\") " pod="cert-manager/cert-manager-webhook-597b96b99b-vc4jz" Mar 20 00:22:17 crc kubenswrapper[5106]: I0320 00:22:17.976315 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-vc4jz" Mar 20 00:22:19 crc kubenswrapper[5106]: I0320 00:22:19.272418 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-5s6ln"] Mar 20 00:22:19 crc kubenswrapper[5106]: I0320 00:22:19.281826 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-5s6ln"] Mar 20 00:22:19 crc kubenswrapper[5106]: I0320 00:22:19.281956 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-5s6ln" Mar 20 00:22:19 crc kubenswrapper[5106]: I0320 00:22:19.284105 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-kt8vv\"" Mar 20 00:22:19 crc kubenswrapper[5106]: I0320 00:22:19.403656 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kth9\" (UniqueName: \"kubernetes.io/projected/2e751cb4-8673-4c7b-91fd-8d080e2ddcfd-kube-api-access-9kth9\") pod \"cert-manager-cainjector-8966b78d4-5s6ln\" (UID: \"2e751cb4-8673-4c7b-91fd-8d080e2ddcfd\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-5s6ln" Mar 20 00:22:19 crc kubenswrapper[5106]: I0320 00:22:19.403798 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e751cb4-8673-4c7b-91fd-8d080e2ddcfd-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-5s6ln\" (UID: \"2e751cb4-8673-4c7b-91fd-8d080e2ddcfd\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-5s6ln" Mar 20 00:22:19 crc kubenswrapper[5106]: I0320 00:22:19.506021 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e751cb4-8673-4c7b-91fd-8d080e2ddcfd-bound-sa-token\") pod 
\"cert-manager-cainjector-8966b78d4-5s6ln\" (UID: \"2e751cb4-8673-4c7b-91fd-8d080e2ddcfd\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-5s6ln" Mar 20 00:22:19 crc kubenswrapper[5106]: I0320 00:22:19.506516 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9kth9\" (UniqueName: \"kubernetes.io/projected/2e751cb4-8673-4c7b-91fd-8d080e2ddcfd-kube-api-access-9kth9\") pod \"cert-manager-cainjector-8966b78d4-5s6ln\" (UID: \"2e751cb4-8673-4c7b-91fd-8d080e2ddcfd\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-5s6ln" Mar 20 00:22:19 crc kubenswrapper[5106]: I0320 00:22:19.527132 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e751cb4-8673-4c7b-91fd-8d080e2ddcfd-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-5s6ln\" (UID: \"2e751cb4-8673-4c7b-91fd-8d080e2ddcfd\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-5s6ln" Mar 20 00:22:19 crc kubenswrapper[5106]: I0320 00:22:19.528230 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kth9\" (UniqueName: \"kubernetes.io/projected/2e751cb4-8673-4c7b-91fd-8d080e2ddcfd-kube-api-access-9kth9\") pod \"cert-manager-cainjector-8966b78d4-5s6ln\" (UID: \"2e751cb4-8673-4c7b-91fd-8d080e2ddcfd\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-5s6ln" Mar 20 00:22:19 crc kubenswrapper[5106]: I0320 00:22:19.598163 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-5s6ln" Mar 20 00:22:23 crc kubenswrapper[5106]: I0320 00:22:23.191038 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-vc4jz"] Mar 20 00:22:23 crc kubenswrapper[5106]: I0320 00:22:23.601554 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-5s6ln"] Mar 20 00:22:23 crc kubenswrapper[5106]: W0320 00:22:23.608327 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e751cb4_8673_4c7b_91fd_8d080e2ddcfd.slice/crio-7ec8390353c55fb4b63d8a3822c21806118c40b8bdf2f4f2e9198b31b26d99c1 WatchSource:0}: Error finding container 7ec8390353c55fb4b63d8a3822c21806118c40b8bdf2f4f2e9198b31b26d99c1: Status 404 returned error can't find the container with id 7ec8390353c55fb4b63d8a3822c21806118c40b8bdf2f4f2e9198b31b26d99c1 Mar 20 00:22:24 crc kubenswrapper[5106]: I0320 00:22:24.200442 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c","Type":"ContainerStarted","Data":"73d1c473f672c143dcb6a3604c93b8bae6c635eee7410566da50a6981c9c4f63"} Mar 20 00:22:24 crc kubenswrapper[5106]: I0320 00:22:24.202782 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-5s6ln" event={"ID":"2e751cb4-8673-4c7b-91fd-8d080e2ddcfd","Type":"ContainerStarted","Data":"7ec8390353c55fb4b63d8a3822c21806118c40b8bdf2f4f2e9198b31b26d99c1"} Mar 20 00:22:24 crc kubenswrapper[5106]: I0320 00:22:24.204884 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-vc4jz" event={"ID":"3971612b-b9d3-4678-859a-01070cad10d1","Type":"ContainerStarted","Data":"05962e52436964a27f4e2274bef884141c606d80e7ebfe6bd53e358f77cdd117"} Mar 20 00:22:24 crc kubenswrapper[5106]: 
I0320 00:22:24.363360 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Mar 20 00:22:24 crc kubenswrapper[5106]: I0320 00:22:24.395727 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Mar 20 00:22:25 crc kubenswrapper[5106]: I0320 00:22:25.373805 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:22:25 crc kubenswrapper[5106]: I0320 00:22:25.373890 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:22:26 crc kubenswrapper[5106]: I0320 00:22:26.223943 5106 generic.go:358] "Generic (PLEG): container finished" podID="6a45ec79-6631-4cc3-a937-0b5e42ec3c8c" containerID="73d1c473f672c143dcb6a3604c93b8bae6c635eee7410566da50a6981c9c4f63" exitCode=0 Mar 20 00:22:26 crc kubenswrapper[5106]: I0320 00:22:26.224050 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c","Type":"ContainerDied","Data":"73d1c473f672c143dcb6a3604c93b8bae6c635eee7410566da50a6981c9c4f63"} Mar 20 00:22:27 crc kubenswrapper[5106]: I0320 00:22:27.231394 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-vc4jz" event={"ID":"3971612b-b9d3-4678-859a-01070cad10d1","Type":"ContainerStarted","Data":"9d861cfb37e026ba120c6ee2b64006cdc1ea8e3b18428bc36509367595989fa8"} Mar 20 00:22:27 crc kubenswrapper[5106]: I0320 
00:22:27.232316 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-vc4jz" Mar 20 00:22:27 crc kubenswrapper[5106]: I0320 00:22:27.234270 5106 generic.go:358] "Generic (PLEG): container finished" podID="6a45ec79-6631-4cc3-a937-0b5e42ec3c8c" containerID="582d1e08335d520ef1f94c4f563ebe52525803be996a069d1cd5cb8530f79c02" exitCode=0 Mar 20 00:22:27 crc kubenswrapper[5106]: I0320 00:22:27.234345 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c","Type":"ContainerDied","Data":"582d1e08335d520ef1f94c4f563ebe52525803be996a069d1cd5cb8530f79c02"} Mar 20 00:22:27 crc kubenswrapper[5106]: I0320 00:22:27.236374 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-5s6ln" event={"ID":"2e751cb4-8673-4c7b-91fd-8d080e2ddcfd","Type":"ContainerStarted","Data":"9df880b6360dcaa0ccb735d7d19ecfcad2eb9f3b5b5814165bb01b22f7c24c17"} Mar 20 00:22:27 crc kubenswrapper[5106]: I0320 00:22:27.253511 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-vc4jz" podStartSLOduration=6.415241728 podStartE2EDuration="10.253489168s" podCreationTimestamp="2026-03-20 00:22:17 +0000 UTC" firstStartedPulling="2026-03-20 00:22:23.195830356 +0000 UTC m=+797.629564410" lastFinishedPulling="2026-03-20 00:22:27.034077796 +0000 UTC m=+801.467811850" observedRunningTime="2026-03-20 00:22:27.248900858 +0000 UTC m=+801.682634912" watchObservedRunningTime="2026-03-20 00:22:27.253489168 +0000 UTC m=+801.687223222" Mar 20 00:22:27 crc kubenswrapper[5106]: I0320 00:22:27.263516 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-5s6ln" podStartSLOduration=4.835593768 podStartE2EDuration="8.263495999s" podCreationTimestamp="2026-03-20 00:22:19 +0000 
UTC" firstStartedPulling="2026-03-20 00:22:23.611264727 +0000 UTC m=+798.044998771" lastFinishedPulling="2026-03-20 00:22:27.039166948 +0000 UTC m=+801.472901002" observedRunningTime="2026-03-20 00:22:27.26188952 +0000 UTC m=+801.695623584" watchObservedRunningTime="2026-03-20 00:22:27.263495999 +0000 UTC m=+801.697230053"
Mar 20 00:22:28 crc kubenswrapper[5106]: I0320 00:22:28.244434 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"6a45ec79-6631-4cc3-a937-0b5e42ec3c8c","Type":"ContainerStarted","Data":"bd9198b3a2d1f214c7eb897c18b533496abd43a8bd7745e0251b6af4cdccc486"}
Mar 20 00:22:28 crc kubenswrapper[5106]: I0320 00:22:28.245166 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0"
Mar 20 00:22:28 crc kubenswrapper[5106]: I0320 00:22:28.280000 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.2044406 podStartE2EDuration="24.279983519s" podCreationTimestamp="2026-03-20 00:22:04 +0000 UTC" firstStartedPulling="2026-03-20 00:22:05.20968543 +0000 UTC m=+779.643419484" lastFinishedPulling="2026-03-20 00:22:23.285228309 +0000 UTC m=+797.718962403" observedRunningTime="2026-03-20 00:22:28.278716069 +0000 UTC m=+802.712450143" watchObservedRunningTime="2026-03-20 00:22:28.279983519 +0000 UTC m=+802.713717573"
Mar 20 00:22:33 crc kubenswrapper[5106]: I0320 00:22:33.249552 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-vc4jz"
Mar 20 00:22:36 crc kubenswrapper[5106]: I0320 00:22:36.659360 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-xrn2l"]
Mar 20 00:22:36 crc kubenswrapper[5106]: I0320 00:22:36.690047 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-xrn2l"]
Mar 20 00:22:36 crc kubenswrapper[5106]: I0320 00:22:36.690319 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-xrn2l"
Mar 20 00:22:36 crc kubenswrapper[5106]: I0320 00:22:36.694406 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-rh58f\""
Mar 20 00:22:36 crc kubenswrapper[5106]: I0320 00:22:36.778604 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ab713b6-d230-4252-9220-51441f61c903-bound-sa-token\") pod \"cert-manager-759f64656b-xrn2l\" (UID: \"0ab713b6-d230-4252-9220-51441f61c903\") " pod="cert-manager/cert-manager-759f64656b-xrn2l"
Mar 20 00:22:36 crc kubenswrapper[5106]: I0320 00:22:36.779020 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmczz\" (UniqueName: \"kubernetes.io/projected/0ab713b6-d230-4252-9220-51441f61c903-kube-api-access-tmczz\") pod \"cert-manager-759f64656b-xrn2l\" (UID: \"0ab713b6-d230-4252-9220-51441f61c903\") " pod="cert-manager/cert-manager-759f64656b-xrn2l"
Mar 20 00:22:36 crc kubenswrapper[5106]: I0320 00:22:36.880924 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ab713b6-d230-4252-9220-51441f61c903-bound-sa-token\") pod \"cert-manager-759f64656b-xrn2l\" (UID: \"0ab713b6-d230-4252-9220-51441f61c903\") " pod="cert-manager/cert-manager-759f64656b-xrn2l"
Mar 20 00:22:36 crc kubenswrapper[5106]: I0320 00:22:36.880976 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tmczz\" (UniqueName: \"kubernetes.io/projected/0ab713b6-d230-4252-9220-51441f61c903-kube-api-access-tmczz\") pod \"cert-manager-759f64656b-xrn2l\" (UID: \"0ab713b6-d230-4252-9220-51441f61c903\") " pod="cert-manager/cert-manager-759f64656b-xrn2l"
Mar 20 00:22:36 crc kubenswrapper[5106]: I0320 00:22:36.905970 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmczz\" (UniqueName: \"kubernetes.io/projected/0ab713b6-d230-4252-9220-51441f61c903-kube-api-access-tmczz\") pod \"cert-manager-759f64656b-xrn2l\" (UID: \"0ab713b6-d230-4252-9220-51441f61c903\") " pod="cert-manager/cert-manager-759f64656b-xrn2l"
Mar 20 00:22:36 crc kubenswrapper[5106]: I0320 00:22:36.911493 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ab713b6-d230-4252-9220-51441f61c903-bound-sa-token\") pod \"cert-manager-759f64656b-xrn2l\" (UID: \"0ab713b6-d230-4252-9220-51441f61c903\") " pod="cert-manager/cert-manager-759f64656b-xrn2l"
Mar 20 00:22:37 crc kubenswrapper[5106]: I0320 00:22:37.022736 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-xrn2l"
Mar 20 00:22:37 crc kubenswrapper[5106]: I0320 00:22:37.447288 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-xrn2l"]
Mar 20 00:22:37 crc kubenswrapper[5106]: I0320 00:22:37.515857 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-xrn2l" event={"ID":"0ab713b6-d230-4252-9220-51441f61c903","Type":"ContainerStarted","Data":"de3c1213e3a732b345c7e00ff55f91cd1b64c376a665eb62de6435074803227c"}
Mar 20 00:22:38 crc kubenswrapper[5106]: I0320 00:22:38.523328 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-xrn2l" event={"ID":"0ab713b6-d230-4252-9220-51441f61c903","Type":"ContainerStarted","Data":"fcc8a9258f7e1caad5c15bd3d8a323009307a04639f1bb5e30c8e5f7fd11c571"}
Mar 20 00:22:38 crc kubenswrapper[5106]: I0320 00:22:38.539439 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-xrn2l" podStartSLOduration=2.539418828 podStartE2EDuration="2.539418828s" podCreationTimestamp="2026-03-20 00:22:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:22:38.538556637 +0000 UTC m=+812.972290701" watchObservedRunningTime="2026-03-20 00:22:38.539418828 +0000 UTC m=+812.973152882"
Mar 20 00:22:39 crc kubenswrapper[5106]: I0320 00:22:39.391149 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="6a45ec79-6631-4cc3-a937-0b5e42ec3c8c" containerName="elasticsearch" probeResult="failure" output=<
Mar 20 00:22:39 crc kubenswrapper[5106]: {"timestamp": "2026-03-20T00:22:39+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Mar 20 00:22:39 crc kubenswrapper[5106]: >
Mar 20 00:22:44 crc kubenswrapper[5106]: I0320 00:22:44.346940 5106 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="6a45ec79-6631-4cc3-a937-0b5e42ec3c8c" containerName="elasticsearch" probeResult="failure" output=<
Mar 20 00:22:44 crc kubenswrapper[5106]: {"timestamp": "2026-03-20T00:22:44+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Mar 20 00:22:44 crc kubenswrapper[5106]: >
Mar 20 00:22:44 crc kubenswrapper[5106]: I0320 00:22:44.915516 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"]
Mar 20 00:22:44 crc kubenswrapper[5106]: I0320 00:22:44.924379 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"]
Mar 20 00:22:44 crc kubenswrapper[5106]: I0320 00:22:44.924554 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Mar 20 00:22:44 crc kubenswrapper[5106]: I0320 00:22:44.926810 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-catalog-configmap-partition-1\""
Mar 20 00:22:45 crc kubenswrapper[5106]: I0320 00:22:45.095635 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v22q6\" (UniqueName: \"kubernetes.io/projected/3c32ab68-794d-4696-b370-a5f8bf8a2d8d-kube-api-access-v22q6\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"3c32ab68-794d-4696-b370-a5f8bf8a2d8d\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Mar 20 00:22:45 crc kubenswrapper[5106]: I0320 00:22:45.095721 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/3c32ab68-794d-4696-b370-a5f8bf8a2d8d-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"3c32ab68-794d-4696-b370-a5f8bf8a2d8d\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Mar 20 00:22:45 crc kubenswrapper[5106]: I0320 00:22:45.095804 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/3c32ab68-794d-4696-b370-a5f8bf8a2d8d-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"3c32ab68-794d-4696-b370-a5f8bf8a2d8d\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Mar 20 00:22:45 crc kubenswrapper[5106]: I0320 00:22:45.197664 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v22q6\" (UniqueName: \"kubernetes.io/projected/3c32ab68-794d-4696-b370-a5f8bf8a2d8d-kube-api-access-v22q6\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"3c32ab68-794d-4696-b370-a5f8bf8a2d8d\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Mar 20 00:22:45 crc kubenswrapper[5106]: I0320 00:22:45.197748 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/3c32ab68-794d-4696-b370-a5f8bf8a2d8d-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"3c32ab68-794d-4696-b370-a5f8bf8a2d8d\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Mar 20 00:22:45 crc kubenswrapper[5106]: I0320 00:22:45.197783 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/3c32ab68-794d-4696-b370-a5f8bf8a2d8d-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"3c32ab68-794d-4696-b370-a5f8bf8a2d8d\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Mar 20 00:22:45 crc kubenswrapper[5106]: I0320 00:22:45.198424 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/3c32ab68-794d-4696-b370-a5f8bf8a2d8d-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"3c32ab68-794d-4696-b370-a5f8bf8a2d8d\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Mar 20 00:22:45 crc kubenswrapper[5106]: I0320 00:22:45.198863 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/3c32ab68-794d-4696-b370-a5f8bf8a2d8d-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"3c32ab68-794d-4696-b370-a5f8bf8a2d8d\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Mar 20 00:22:45 crc kubenswrapper[5106]: I0320 00:22:45.243644 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v22q6\" (UniqueName: \"kubernetes.io/projected/3c32ab68-794d-4696-b370-a5f8bf8a2d8d-kube-api-access-v22q6\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"3c32ab68-794d-4696-b370-a5f8bf8a2d8d\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Mar 20 00:22:45 crc kubenswrapper[5106]: I0320 00:22:45.265845 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Mar 20 00:22:45 crc kubenswrapper[5106]: I0320 00:22:45.697464 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"]
Mar 20 00:22:45 crc kubenswrapper[5106]: W0320 00:22:45.707635 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c32ab68_794d_4696_b370_a5f8bf8a2d8d.slice/crio-6c7e5a52032a990af7cbf4c986b78c5aad22c2f670ab3589b20e0f8b27f56e91 WatchSource:0}: Error finding container 6c7e5a52032a990af7cbf4c986b78c5aad22c2f670ab3589b20e0f8b27f56e91: Status 404 returned error can't find the container with id 6c7e5a52032a990af7cbf4c986b78c5aad22c2f670ab3589b20e0f8b27f56e91
Mar 20 00:22:46 crc kubenswrapper[5106]: I0320 00:22:46.580641 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"3c32ab68-794d-4696-b370-a5f8bf8a2d8d","Type":"ContainerStarted","Data":"6c7e5a52032a990af7cbf4c986b78c5aad22c2f670ab3589b20e0f8b27f56e91"}
Mar 20 00:22:49 crc kubenswrapper[5106]: I0320 00:22:49.651597 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Mar 20 00:22:55 crc kubenswrapper[5106]: I0320 00:22:55.373379 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 00:22:55 crc kubenswrapper[5106]: I0320 00:22:55.373809 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 00:22:55 crc kubenswrapper[5106]: I0320 00:22:55.373866 5106 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-769dn"
Mar 20 00:22:55 crc kubenswrapper[5106]: I0320 00:22:55.374750 5106 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b9698c7bd4bd271067cba47912a53b2331be94e66a7a5d4468da4bc263f23f37"} pod="openshift-machine-config-operator/machine-config-daemon-769dn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 20 00:22:55 crc kubenswrapper[5106]: I0320 00:22:55.374830 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" containerID="cri-o://b9698c7bd4bd271067cba47912a53b2331be94e66a7a5d4468da4bc263f23f37" gracePeriod=600
Mar 20 00:22:55 crc kubenswrapper[5106]: I0320 00:22:55.645930 5106 generic.go:358] "Generic (PLEG): container finished" podID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerID="b9698c7bd4bd271067cba47912a53b2331be94e66a7a5d4468da4bc263f23f37" exitCode=0
Mar 20 00:22:55 crc kubenswrapper[5106]: I0320 00:22:55.646005 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerDied","Data":"b9698c7bd4bd271067cba47912a53b2331be94e66a7a5d4468da4bc263f23f37"}
Mar 20 00:22:55 crc kubenswrapper[5106]: I0320 00:22:55.646381 5106 scope.go:117] "RemoveContainer" containerID="1228d087c7bde3c99c7452feeb09cc740b7b75ef32a544f2a368d4a749bf059b"
Mar 20 00:22:56 crc kubenswrapper[5106]: I0320 00:22:56.655433 5106 generic.go:358] "Generic (PLEG): container finished" podID="3c32ab68-794d-4696-b370-a5f8bf8a2d8d" containerID="fef959268cd6b57ca42d7d582365d94a1ffb334ef93758a85ef6410bcc084929" exitCode=0
Mar 20 00:22:56 crc kubenswrapper[5106]: I0320 00:22:56.655491 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"3c32ab68-794d-4696-b370-a5f8bf8a2d8d","Type":"ContainerDied","Data":"fef959268cd6b57ca42d7d582365d94a1ffb334ef93758a85ef6410bcc084929"}
Mar 20 00:22:56 crc kubenswrapper[5106]: I0320 00:22:56.660375 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerStarted","Data":"6986ec753318922c38954c5594d06021e7ff8e83bd99bff58c1e865b369e05df"}
Mar 20 00:23:02 crc kubenswrapper[5106]: I0320 00:23:02.703852 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"3c32ab68-794d-4696-b370-a5f8bf8a2d8d","Type":"ContainerStarted","Data":"30fc2f4615df68e8c2a4bd43fac4686bdc51d3c465376f4554bbf81ffc442102"}
Mar 20 00:23:02 crc kubenswrapper[5106]: I0320 00:23:02.737667 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" podStartSLOduration=2.926450301 podStartE2EDuration="18.737633509s" podCreationTimestamp="2026-03-20 00:22:44 +0000 UTC" firstStartedPulling="2026-03-20 00:22:45.709813453 +0000 UTC m=+820.143547517" lastFinishedPulling="2026-03-20 00:23:01.520996631 +0000 UTC m=+835.954730725" observedRunningTime="2026-03-20 00:23:02.728399277 +0000 UTC m=+837.162133361" watchObservedRunningTime="2026-03-20 00:23:02.737633509 +0000 UTC m=+837.171367593"
Mar 20 00:23:03 crc kubenswrapper[5106]: I0320 00:23:03.581343 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"]
Mar 20 00:23:03 crc kubenswrapper[5106]: I0320 00:23:03.675923 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"]
Mar 20 00:23:03 crc kubenswrapper[5106]: I0320 00:23:03.676149 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"
Mar 20 00:23:03 crc kubenswrapper[5106]: I0320 00:23:03.766843 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk\" (UID: \"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"
Mar 20 00:23:03 crc kubenswrapper[5106]: I0320 00:23:03.767688 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxjc4\" (UniqueName: \"kubernetes.io/projected/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-kube-api-access-zxjc4\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk\" (UID: \"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"
Mar 20 00:23:03 crc kubenswrapper[5106]: I0320 00:23:03.767752 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk\" (UID: \"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"
Mar 20 00:23:03 crc kubenswrapper[5106]: I0320 00:23:03.869528 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk\" (UID: \"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"
Mar 20 00:23:03 crc kubenswrapper[5106]: I0320 00:23:03.869571 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zxjc4\" (UniqueName: \"kubernetes.io/projected/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-kube-api-access-zxjc4\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk\" (UID: \"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"
Mar 20 00:23:03 crc kubenswrapper[5106]: I0320 00:23:03.869606 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk\" (UID: \"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"
Mar 20 00:23:03 crc kubenswrapper[5106]: I0320 00:23:03.870264 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk\" (UID: \"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"
Mar 20 00:23:03 crc kubenswrapper[5106]: I0320 00:23:03.870304 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk\" (UID: \"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"
Mar 20 00:23:03 crc kubenswrapper[5106]: I0320 00:23:03.890876 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxjc4\" (UniqueName: \"kubernetes.io/projected/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-kube-api-access-zxjc4\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk\" (UID: \"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"
Mar 20 00:23:03 crc kubenswrapper[5106]: I0320 00:23:03.994976 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"
Mar 20 00:23:04 crc kubenswrapper[5106]: I0320 00:23:04.182987 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"]
Mar 20 00:23:04 crc kubenswrapper[5106]: W0320 00:23:04.190075 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0ca980f_6a69_4719_b7a4_1e4a4e01ecfc.slice/crio-b014a220988fc260fa0ce75c522381bab844ce32e4faba70e418396367f16c98 WatchSource:0}: Error finding container b014a220988fc260fa0ce75c522381bab844ce32e4faba70e418396367f16c98: Status 404 returned error can't find the container with id b014a220988fc260fa0ce75c522381bab844ce32e4faba70e418396367f16c98
Mar 20 00:23:04 crc kubenswrapper[5106]: I0320 00:23:04.716198 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk" event={"ID":"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc","Type":"ContainerStarted","Data":"f0fdb7266e8d78b70d6617245ad71eab121a8d2f15439f94ebb1131c7557c7a0"}
Mar 20 00:23:04 crc kubenswrapper[5106]: I0320 00:23:04.716635 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk" event={"ID":"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc","Type":"ContainerStarted","Data":"b014a220988fc260fa0ce75c522381bab844ce32e4faba70e418396367f16c98"}
Mar 20 00:23:05 crc kubenswrapper[5106]: I0320 00:23:05.726027 5106 generic.go:358] "Generic (PLEG): container finished" podID="d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc" containerID="f0fdb7266e8d78b70d6617245ad71eab121a8d2f15439f94ebb1131c7557c7a0" exitCode=0
Mar 20 00:23:05 crc kubenswrapper[5106]: I0320 00:23:05.726183 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk" event={"ID":"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc","Type":"ContainerDied","Data":"f0fdb7266e8d78b70d6617245ad71eab121a8d2f15439f94ebb1131c7557c7a0"}
Mar 20 00:23:08 crc kubenswrapper[5106]: I0320 00:23:08.754421 5106 generic.go:358] "Generic (PLEG): container finished" podID="d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc" containerID="3c5a334bdf4df1536c926fdf267eb423ddf0ae6a7f648486a7127f2e91312619" exitCode=0
Mar 20 00:23:08 crc kubenswrapper[5106]: I0320 00:23:08.754496 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk" event={"ID":"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc","Type":"ContainerDied","Data":"3c5a334bdf4df1536c926fdf267eb423ddf0ae6a7f648486a7127f2e91312619"}
Mar 20 00:23:09 crc kubenswrapper[5106]: I0320 00:23:09.762721 5106 generic.go:358] "Generic (PLEG): container finished" podID="d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc" containerID="1d3e5baefb96efa622749d6c35b8d67606886d7a8daac85688b9f29b7bac2d88" exitCode=0
Mar 20 00:23:09 crc kubenswrapper[5106]: I0320 00:23:09.762859 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk" event={"ID":"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc","Type":"ContainerDied","Data":"1d3e5baefb96efa622749d6c35b8d67606886d7a8daac85688b9f29b7bac2d88"}
Mar 20 00:23:10 crc kubenswrapper[5106]: I0320 00:23:10.989487 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"
Mar 20 00:23:11 crc kubenswrapper[5106]: I0320 00:23:11.080718 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-util\") pod \"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc\" (UID: \"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc\") "
Mar 20 00:23:11 crc kubenswrapper[5106]: I0320 00:23:11.081005 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxjc4\" (UniqueName: \"kubernetes.io/projected/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-kube-api-access-zxjc4\") pod \"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc\" (UID: \"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc\") "
Mar 20 00:23:11 crc kubenswrapper[5106]: I0320 00:23:11.081055 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-bundle\") pod \"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc\" (UID: \"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc\") "
Mar 20 00:23:11 crc kubenswrapper[5106]: I0320 00:23:11.082721 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-bundle" (OuterVolumeSpecName: "bundle") pod "d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc" (UID: "d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:23:11 crc kubenswrapper[5106]: I0320 00:23:11.087009 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-kube-api-access-zxjc4" (OuterVolumeSpecName: "kube-api-access-zxjc4") pod "d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc" (UID: "d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc"). InnerVolumeSpecName "kube-api-access-zxjc4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:23:11 crc kubenswrapper[5106]: I0320 00:23:11.089610 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-util" (OuterVolumeSpecName: "util") pod "d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc" (UID: "d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:23:11 crc kubenswrapper[5106]: I0320 00:23:11.182454 5106 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-util\") on node \"crc\" DevicePath \"\""
Mar 20 00:23:11 crc kubenswrapper[5106]: I0320 00:23:11.182478 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zxjc4\" (UniqueName: \"kubernetes.io/projected/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-kube-api-access-zxjc4\") on node \"crc\" DevicePath \"\""
Mar 20 00:23:11 crc kubenswrapper[5106]: I0320 00:23:11.182488 5106 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 00:23:11 crc kubenswrapper[5106]: I0320 00:23:11.778258 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk" event={"ID":"d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc","Type":"ContainerDied","Data":"b014a220988fc260fa0ce75c522381bab844ce32e4faba70e418396367f16c98"}
Mar 20 00:23:11 crc kubenswrapper[5106]: I0320 00:23:11.778297 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661bpzlk"
Mar 20 00:23:11 crc kubenswrapper[5106]: I0320 00:23:11.778306 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b014a220988fc260fa0ce75c522381bab844ce32e4faba70e418396367f16c98"
Mar 20 00:23:12 crc kubenswrapper[5106]: I0320 00:23:12.650799 5106 scope.go:117] "RemoveContainer" containerID="42a205c7541758bf07cecb27cdf77e17342aa4826950f1106aa1ad6c1004fd0b"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.114072 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-fddbdb85c-cpbpr"]
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.114999 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc" containerName="extract"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.115013 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc" containerName="extract"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.115043 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc" containerName="pull"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.115049 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc" containerName="pull"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.115060 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc" containerName="util"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.115065 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc" containerName="util"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.115159 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="d0ca980f-6a69-4719-b7a4-1e4a4e01ecfc" containerName="extract"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.152825 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-fddbdb85c-cpbpr"]
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.152973 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-fddbdb85c-cpbpr"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.155686 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-s58cw\""
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.158316 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g24hh\" (UniqueName: \"kubernetes.io/projected/60b774a2-0c92-4f09-898c-49a071b55d6f-kube-api-access-g24hh\") pod \"smart-gateway-operator-fddbdb85c-cpbpr\" (UID: \"60b774a2-0c92-4f09-898c-49a071b55d6f\") " pod="service-telemetry/smart-gateway-operator-fddbdb85c-cpbpr"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.158405 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/60b774a2-0c92-4f09-898c-49a071b55d6f-runner\") pod \"smart-gateway-operator-fddbdb85c-cpbpr\" (UID: \"60b774a2-0c92-4f09-898c-49a071b55d6f\") " pod="service-telemetry/smart-gateway-operator-fddbdb85c-cpbpr"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.259376 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g24hh\" (UniqueName: \"kubernetes.io/projected/60b774a2-0c92-4f09-898c-49a071b55d6f-kube-api-access-g24hh\") pod \"smart-gateway-operator-fddbdb85c-cpbpr\" (UID: \"60b774a2-0c92-4f09-898c-49a071b55d6f\") " pod="service-telemetry/smart-gateway-operator-fddbdb85c-cpbpr"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.259747 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/60b774a2-0c92-4f09-898c-49a071b55d6f-runner\") pod \"smart-gateway-operator-fddbdb85c-cpbpr\" (UID: \"60b774a2-0c92-4f09-898c-49a071b55d6f\") " pod="service-telemetry/smart-gateway-operator-fddbdb85c-cpbpr"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.260057 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/60b774a2-0c92-4f09-898c-49a071b55d6f-runner\") pod \"smart-gateway-operator-fddbdb85c-cpbpr\" (UID: \"60b774a2-0c92-4f09-898c-49a071b55d6f\") " pod="service-telemetry/smart-gateway-operator-fddbdb85c-cpbpr"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.278768 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g24hh\" (UniqueName: \"kubernetes.io/projected/60b774a2-0c92-4f09-898c-49a071b55d6f-kube-api-access-g24hh\") pod \"smart-gateway-operator-fddbdb85c-cpbpr\" (UID: \"60b774a2-0c92-4f09-898c-49a071b55d6f\") " pod="service-telemetry/smart-gateway-operator-fddbdb85c-cpbpr"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.471777 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-fddbdb85c-cpbpr"
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.674307 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-fddbdb85c-cpbpr"]
Mar 20 00:23:17 crc kubenswrapper[5106]: I0320 00:23:17.816106 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-fddbdb85c-cpbpr" event={"ID":"60b774a2-0c92-4f09-898c-49a071b55d6f","Type":"ContainerStarted","Data":"844e61735ae8f30e535b62106d5c8d74b35de414e7cffc14ff40bc8afaeb5e8c"}
Mar 20 00:23:39 crc kubenswrapper[5106]: I0320 00:23:39.223521 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-fddbdb85c-cpbpr" event={"ID":"60b774a2-0c92-4f09-898c-49a071b55d6f","Type":"ContainerStarted","Data":"4ace782b4170ed51253398bfd5cd7783cafaf5d00a7b8a0d89b3983e03de46a4"}
Mar 20 00:23:39 crc kubenswrapper[5106]: I0320 00:23:39.241438 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-fddbdb85c-cpbpr" podStartSLOduration=1.217318455 podStartE2EDuration="22.241416035s" podCreationTimestamp="2026-03-20 00:23:17 +0000 UTC" firstStartedPulling="2026-03-20 00:23:17.681224109 +0000 UTC m=+852.114958163" lastFinishedPulling="2026-03-20 00:23:38.705321689 +0000 UTC m=+873.139055743" observedRunningTime="2026-03-20 00:23:39.236946407 +0000 UTC m=+873.670680471" watchObservedRunningTime="2026-03-20 00:23:39.241416035 +0000 UTC m=+873.675150129"
Mar 20 00:23:49 crc kubenswrapper[5106]: I0320 00:23:49.310426 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"]
Mar 20 00:23:49 crc kubenswrapper[5106]: I0320 00:23:49.317637 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"
Mar 20 00:23:49 crc kubenswrapper[5106]: I0320 00:23:49.320121 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-catalog-configmap-partition-1\""
Mar 20 00:23:49 crc kubenswrapper[5106]: I0320 00:23:49.325830 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"]
Mar 20 00:23:49 crc kubenswrapper[5106]: I0320 00:23:49.488640 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/0ad4c63b-788f-48f5-bd28-be8868c91c81-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"0ad4c63b-788f-48f5-bd28-be8868c91c81\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"
Mar 20 00:23:49 crc kubenswrapper[5106]: I0320 00:23:49.488945 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/0ad4c63b-788f-48f5-bd28-be8868c91c81-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"0ad4c63b-788f-48f5-bd28-be8868c91c81\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"
Mar 20 00:23:49 crc kubenswrapper[5106]: I0320 00:23:49.489076 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnf97\" (UniqueName:
\"kubernetes.io/projected/0ad4c63b-788f-48f5-bd28-be8868c91c81-kube-api-access-wnf97\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"0ad4c63b-788f-48f5-bd28-be8868c91c81\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Mar 20 00:23:49 crc kubenswrapper[5106]: I0320 00:23:49.590530 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/0ad4c63b-788f-48f5-bd28-be8868c91c81-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"0ad4c63b-788f-48f5-bd28-be8868c91c81\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Mar 20 00:23:49 crc kubenswrapper[5106]: I0320 00:23:49.590732 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/0ad4c63b-788f-48f5-bd28-be8868c91c81-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"0ad4c63b-788f-48f5-bd28-be8868c91c81\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Mar 20 00:23:49 crc kubenswrapper[5106]: I0320 00:23:49.590949 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wnf97\" (UniqueName: \"kubernetes.io/projected/0ad4c63b-788f-48f5-bd28-be8868c91c81-kube-api-access-wnf97\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"0ad4c63b-788f-48f5-bd28-be8868c91c81\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Mar 20 00:23:49 crc kubenswrapper[5106]: I0320 00:23:49.591886 5106 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/0ad4c63b-788f-48f5-bd28-be8868c91c81-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"0ad4c63b-788f-48f5-bd28-be8868c91c81\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Mar 20 00:23:49 crc kubenswrapper[5106]: I0320 00:23:49.592471 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/0ad4c63b-788f-48f5-bd28-be8868c91c81-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"0ad4c63b-788f-48f5-bd28-be8868c91c81\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Mar 20 00:23:49 crc kubenswrapper[5106]: I0320 00:23:49.610966 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnf97\" (UniqueName: \"kubernetes.io/projected/0ad4c63b-788f-48f5-bd28-be8868c91c81-kube-api-access-wnf97\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"0ad4c63b-788f-48f5-bd28-be8868c91c81\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Mar 20 00:23:49 crc kubenswrapper[5106]: I0320 00:23:49.634760 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Mar 20 00:23:49 crc kubenswrapper[5106]: I0320 00:23:49.837910 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Mar 20 00:23:50 crc kubenswrapper[5106]: I0320 00:23:50.300166 5106 generic.go:358] "Generic (PLEG): container finished" podID="0ad4c63b-788f-48f5-bd28-be8868c91c81" containerID="757a922a9aae43f99507b517e2c266abc3d472c4bf43faaf722e09c969922839" exitCode=0 Mar 20 00:23:50 crc kubenswrapper[5106]: I0320 00:23:50.300251 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"0ad4c63b-788f-48f5-bd28-be8868c91c81","Type":"ContainerDied","Data":"757a922a9aae43f99507b517e2c266abc3d472c4bf43faaf722e09c969922839"} Mar 20 00:23:50 crc kubenswrapper[5106]: I0320 00:23:50.300539 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"0ad4c63b-788f-48f5-bd28-be8868c91c81","Type":"ContainerStarted","Data":"0c2c3bf2af1ee545bcba8897f94a847fadb1663be619a03da557abae0cbd7d0c"} Mar 20 00:23:51 crc kubenswrapper[5106]: I0320 00:23:51.311881 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"0ad4c63b-788f-48f5-bd28-be8868c91c81","Type":"ContainerStarted","Data":"2a4d457835df2ec0d327af8b70e6ee13ea94ab075e51a693bd96df857d41f4c2"} Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.347520 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" podStartSLOduration=2.81611382 podStartE2EDuration="3.347375019s" podCreationTimestamp="2026-03-20 00:23:49 +0000 UTC" 
firstStartedPulling="2026-03-20 00:23:50.300948135 +0000 UTC m=+884.734682189" lastFinishedPulling="2026-03-20 00:23:50.832209314 +0000 UTC m=+885.265943388" observedRunningTime="2026-03-20 00:23:51.333360469 +0000 UTC m=+885.767094523" watchObservedRunningTime="2026-03-20 00:23:52.347375019 +0000 UTC m=+886.781109073" Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.349605 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk"] Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.356749 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk"] Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.356882 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.435447 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44b52551-7156-48e9-bbe2-91d38dac73eb-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk\" (UID: \"44b52551-7156-48e9-bbe2-91d38dac73eb\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.435566 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44b52551-7156-48e9-bbe2-91d38dac73eb-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk\" (UID: \"44b52551-7156-48e9-bbe2-91d38dac73eb\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.435615 5106 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8l6b\" (UniqueName: \"kubernetes.io/projected/44b52551-7156-48e9-bbe2-91d38dac73eb-kube-api-access-s8l6b\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk\" (UID: \"44b52551-7156-48e9-bbe2-91d38dac73eb\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.537551 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44b52551-7156-48e9-bbe2-91d38dac73eb-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk\" (UID: \"44b52551-7156-48e9-bbe2-91d38dac73eb\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.537634 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44b52551-7156-48e9-bbe2-91d38dac73eb-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk\" (UID: \"44b52551-7156-48e9-bbe2-91d38dac73eb\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.537661 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s8l6b\" (UniqueName: \"kubernetes.io/projected/44b52551-7156-48e9-bbe2-91d38dac73eb-kube-api-access-s8l6b\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk\" (UID: \"44b52551-7156-48e9-bbe2-91d38dac73eb\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.538214 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/44b52551-7156-48e9-bbe2-91d38dac73eb-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk\" (UID: \"44b52551-7156-48e9-bbe2-91d38dac73eb\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.538247 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44b52551-7156-48e9-bbe2-91d38dac73eb-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk\" (UID: \"44b52551-7156-48e9-bbe2-91d38dac73eb\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.560689 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8l6b\" (UniqueName: \"kubernetes.io/projected/44b52551-7156-48e9-bbe2-91d38dac73eb-kube-api-access-s8l6b\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk\" (UID: \"44b52551-7156-48e9-bbe2-91d38dac73eb\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.676647 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.881215 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk"] Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.930524 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5"] Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.959995 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5"] Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.960184 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" Mar 20 00:23:52 crc kubenswrapper[5106]: I0320 00:23:52.968037 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Mar 20 00:23:53 crc kubenswrapper[5106]: I0320 00:23:53.046051 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b455fcd-8802-4d40-99b3-6863635bcccd-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5\" (UID: \"7b455fcd-8802-4d40-99b3-6863635bcccd\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" Mar 20 00:23:53 crc kubenswrapper[5106]: I0320 00:23:53.046254 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b455fcd-8802-4d40-99b3-6863635bcccd-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5\" (UID: 
\"7b455fcd-8802-4d40-99b3-6863635bcccd\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" Mar 20 00:23:53 crc kubenswrapper[5106]: I0320 00:23:53.046354 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5xpz\" (UniqueName: \"kubernetes.io/projected/7b455fcd-8802-4d40-99b3-6863635bcccd-kube-api-access-q5xpz\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5\" (UID: \"7b455fcd-8802-4d40-99b3-6863635bcccd\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" Mar 20 00:23:53 crc kubenswrapper[5106]: I0320 00:23:53.148199 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b455fcd-8802-4d40-99b3-6863635bcccd-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5\" (UID: \"7b455fcd-8802-4d40-99b3-6863635bcccd\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" Mar 20 00:23:53 crc kubenswrapper[5106]: I0320 00:23:53.148261 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b455fcd-8802-4d40-99b3-6863635bcccd-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5\" (UID: \"7b455fcd-8802-4d40-99b3-6863635bcccd\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" Mar 20 00:23:53 crc kubenswrapper[5106]: I0320 00:23:53.148289 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5xpz\" (UniqueName: \"kubernetes.io/projected/7b455fcd-8802-4d40-99b3-6863635bcccd-kube-api-access-q5xpz\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5\" (UID: \"7b455fcd-8802-4d40-99b3-6863635bcccd\") " 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" Mar 20 00:23:53 crc kubenswrapper[5106]: I0320 00:23:53.148738 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b455fcd-8802-4d40-99b3-6863635bcccd-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5\" (UID: \"7b455fcd-8802-4d40-99b3-6863635bcccd\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" Mar 20 00:23:53 crc kubenswrapper[5106]: I0320 00:23:53.148790 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b455fcd-8802-4d40-99b3-6863635bcccd-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5\" (UID: \"7b455fcd-8802-4d40-99b3-6863635bcccd\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" Mar 20 00:23:53 crc kubenswrapper[5106]: I0320 00:23:53.170260 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5xpz\" (UniqueName: \"kubernetes.io/projected/7b455fcd-8802-4d40-99b3-6863635bcccd-kube-api-access-q5xpz\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5\" (UID: \"7b455fcd-8802-4d40-99b3-6863635bcccd\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" Mar 20 00:23:53 crc kubenswrapper[5106]: I0320 00:23:53.280022 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" Mar 20 00:23:53 crc kubenswrapper[5106]: I0320 00:23:53.343231 5106 generic.go:358] "Generic (PLEG): container finished" podID="44b52551-7156-48e9-bbe2-91d38dac73eb" containerID="4bd02708b0047fa6849381c8e56998dec6adcad3f863f7939a757b1270f8d1d2" exitCode=0 Mar 20 00:23:53 crc kubenswrapper[5106]: I0320 00:23:53.343288 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" event={"ID":"44b52551-7156-48e9-bbe2-91d38dac73eb","Type":"ContainerDied","Data":"4bd02708b0047fa6849381c8e56998dec6adcad3f863f7939a757b1270f8d1d2"} Mar 20 00:23:53 crc kubenswrapper[5106]: I0320 00:23:53.343339 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" event={"ID":"44b52551-7156-48e9-bbe2-91d38dac73eb","Type":"ContainerStarted","Data":"220f192f11754c598a448548a9ce51a25331e4d73a8f4a7b9eb15e88c4703d43"} Mar 20 00:23:53 crc kubenswrapper[5106]: I0320 00:23:53.466386 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5"] Mar 20 00:23:54 crc kubenswrapper[5106]: I0320 00:23:54.352187 5106 generic.go:358] "Generic (PLEG): container finished" podID="7b455fcd-8802-4d40-99b3-6863635bcccd" containerID="c667bbd0317e6011f8f405cb15aa67e2a5be4027e6674789fe9c9f558b76018f" exitCode=0 Mar 20 00:23:54 crc kubenswrapper[5106]: I0320 00:23:54.352463 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" event={"ID":"7b455fcd-8802-4d40-99b3-6863635bcccd","Type":"ContainerDied","Data":"c667bbd0317e6011f8f405cb15aa67e2a5be4027e6674789fe9c9f558b76018f"} Mar 20 00:23:54 crc kubenswrapper[5106]: I0320 00:23:54.352842 5106 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" event={"ID":"7b455fcd-8802-4d40-99b3-6863635bcccd","Type":"ContainerStarted","Data":"cc70eb70b99f782fa8b9269ff2046a9b19b09bfd086e98e33426b29ca0e5bb58"} Mar 20 00:23:55 crc kubenswrapper[5106]: I0320 00:23:55.360394 5106 generic.go:358] "Generic (PLEG): container finished" podID="44b52551-7156-48e9-bbe2-91d38dac73eb" containerID="e38fe9ea93d051119296a6b7568b3dd30b517926a1df51278bdcf6f44aeb1cdd" exitCode=0 Mar 20 00:23:55 crc kubenswrapper[5106]: I0320 00:23:55.360472 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" event={"ID":"44b52551-7156-48e9-bbe2-91d38dac73eb","Type":"ContainerDied","Data":"e38fe9ea93d051119296a6b7568b3dd30b517926a1df51278bdcf6f44aeb1cdd"} Mar 20 00:23:56 crc kubenswrapper[5106]: I0320 00:23:56.370473 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" event={"ID":"44b52551-7156-48e9-bbe2-91d38dac73eb","Type":"ContainerStarted","Data":"93f8166b8fd9baf124e43fbe17b491a2b24ad22a1974ac39fd12b4230b07319f"} Mar 20 00:23:57 crc kubenswrapper[5106]: I0320 00:23:57.379304 5106 generic.go:358] "Generic (PLEG): container finished" podID="44b52551-7156-48e9-bbe2-91d38dac73eb" containerID="93f8166b8fd9baf124e43fbe17b491a2b24ad22a1974ac39fd12b4230b07319f" exitCode=0 Mar 20 00:23:57 crc kubenswrapper[5106]: I0320 00:23:57.379413 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" event={"ID":"44b52551-7156-48e9-bbe2-91d38dac73eb","Type":"ContainerDied","Data":"93f8166b8fd9baf124e43fbe17b491a2b24ad22a1974ac39fd12b4230b07319f"} Mar 20 00:23:57 crc kubenswrapper[5106]: I0320 00:23:57.381542 5106 generic.go:358] "Generic (PLEG): container finished" 
podID="7b455fcd-8802-4d40-99b3-6863635bcccd" containerID="e3f818f4f03a92f9c45b559cb62b57f4b4f87e94f5107ac627cdba730d2db426" exitCode=0 Mar 20 00:23:57 crc kubenswrapper[5106]: I0320 00:23:57.381676 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" event={"ID":"7b455fcd-8802-4d40-99b3-6863635bcccd","Type":"ContainerDied","Data":"e3f818f4f03a92f9c45b559cb62b57f4b4f87e94f5107ac627cdba730d2db426"} Mar 20 00:23:58 crc kubenswrapper[5106]: I0320 00:23:58.391364 5106 generic.go:358] "Generic (PLEG): container finished" podID="7b455fcd-8802-4d40-99b3-6863635bcccd" containerID="cd674ff2b3cb3a149632824e655f1e43886f4ef7c7c2ffb9506576ba39759374" exitCode=0 Mar 20 00:23:58 crc kubenswrapper[5106]: I0320 00:23:58.391411 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" event={"ID":"7b455fcd-8802-4d40-99b3-6863635bcccd","Type":"ContainerDied","Data":"cd674ff2b3cb3a149632824e655f1e43886f4ef7c7c2ffb9506576ba39759374"} Mar 20 00:23:58 crc kubenswrapper[5106]: I0320 00:23:58.630239 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" Mar 20 00:23:58 crc kubenswrapper[5106]: I0320 00:23:58.728755 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44b52551-7156-48e9-bbe2-91d38dac73eb-bundle\") pod \"44b52551-7156-48e9-bbe2-91d38dac73eb\" (UID: \"44b52551-7156-48e9-bbe2-91d38dac73eb\") " Mar 20 00:23:58 crc kubenswrapper[5106]: I0320 00:23:58.728821 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8l6b\" (UniqueName: \"kubernetes.io/projected/44b52551-7156-48e9-bbe2-91d38dac73eb-kube-api-access-s8l6b\") pod \"44b52551-7156-48e9-bbe2-91d38dac73eb\" (UID: \"44b52551-7156-48e9-bbe2-91d38dac73eb\") " Mar 20 00:23:58 crc kubenswrapper[5106]: I0320 00:23:58.728859 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44b52551-7156-48e9-bbe2-91d38dac73eb-util\") pod \"44b52551-7156-48e9-bbe2-91d38dac73eb\" (UID: \"44b52551-7156-48e9-bbe2-91d38dac73eb\") " Mar 20 00:23:58 crc kubenswrapper[5106]: I0320 00:23:58.731355 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44b52551-7156-48e9-bbe2-91d38dac73eb-bundle" (OuterVolumeSpecName: "bundle") pod "44b52551-7156-48e9-bbe2-91d38dac73eb" (UID: "44b52551-7156-48e9-bbe2-91d38dac73eb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:23:58 crc kubenswrapper[5106]: I0320 00:23:58.735661 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44b52551-7156-48e9-bbe2-91d38dac73eb-kube-api-access-s8l6b" (OuterVolumeSpecName: "kube-api-access-s8l6b") pod "44b52551-7156-48e9-bbe2-91d38dac73eb" (UID: "44b52551-7156-48e9-bbe2-91d38dac73eb"). InnerVolumeSpecName "kube-api-access-s8l6b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:23:58 crc kubenswrapper[5106]: I0320 00:23:58.830238 5106 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44b52551-7156-48e9-bbe2-91d38dac73eb-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 00:23:58 crc kubenswrapper[5106]: I0320 00:23:58.830275 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s8l6b\" (UniqueName: \"kubernetes.io/projected/44b52551-7156-48e9-bbe2-91d38dac73eb-kube-api-access-s8l6b\") on node \"crc\" DevicePath \"\"" Mar 20 00:23:59 crc kubenswrapper[5106]: I0320 00:23:59.307121 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44b52551-7156-48e9-bbe2-91d38dac73eb-util" (OuterVolumeSpecName: "util") pod "44b52551-7156-48e9-bbe2-91d38dac73eb" (UID: "44b52551-7156-48e9-bbe2-91d38dac73eb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:23:59 crc kubenswrapper[5106]: I0320 00:23:59.336910 5106 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44b52551-7156-48e9-bbe2-91d38dac73eb-util\") on node \"crc\" DevicePath \"\"" Mar 20 00:23:59 crc kubenswrapper[5106]: I0320 00:23:59.400347 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" Mar 20 00:23:59 crc kubenswrapper[5106]: I0320 00:23:59.400349 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572pmmgk" event={"ID":"44b52551-7156-48e9-bbe2-91d38dac73eb","Type":"ContainerDied","Data":"220f192f11754c598a448548a9ce51a25331e4d73a8f4a7b9eb15e88c4703d43"} Mar 20 00:23:59 crc kubenswrapper[5106]: I0320 00:23:59.400462 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220f192f11754c598a448548a9ce51a25331e4d73a8f4a7b9eb15e88c4703d43" Mar 20 00:23:59 crc kubenswrapper[5106]: I0320 00:23:59.627860 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" Mar 20 00:23:59 crc kubenswrapper[5106]: I0320 00:23:59.742760 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5xpz\" (UniqueName: \"kubernetes.io/projected/7b455fcd-8802-4d40-99b3-6863635bcccd-kube-api-access-q5xpz\") pod \"7b455fcd-8802-4d40-99b3-6863635bcccd\" (UID: \"7b455fcd-8802-4d40-99b3-6863635bcccd\") " Mar 20 00:23:59 crc kubenswrapper[5106]: I0320 00:23:59.742882 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b455fcd-8802-4d40-99b3-6863635bcccd-bundle\") pod \"7b455fcd-8802-4d40-99b3-6863635bcccd\" (UID: \"7b455fcd-8802-4d40-99b3-6863635bcccd\") " Mar 20 00:23:59 crc kubenswrapper[5106]: I0320 00:23:59.743068 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b455fcd-8802-4d40-99b3-6863635bcccd-util\") pod \"7b455fcd-8802-4d40-99b3-6863635bcccd\" (UID: \"7b455fcd-8802-4d40-99b3-6863635bcccd\") " Mar 20 00:23:59 crc 
kubenswrapper[5106]: I0320 00:23:59.743532 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b455fcd-8802-4d40-99b3-6863635bcccd-bundle" (OuterVolumeSpecName: "bundle") pod "7b455fcd-8802-4d40-99b3-6863635bcccd" (UID: "7b455fcd-8802-4d40-99b3-6863635bcccd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:23:59 crc kubenswrapper[5106]: I0320 00:23:59.752096 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b455fcd-8802-4d40-99b3-6863635bcccd-util" (OuterVolumeSpecName: "util") pod "7b455fcd-8802-4d40-99b3-6863635bcccd" (UID: "7b455fcd-8802-4d40-99b3-6863635bcccd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:23:59 crc kubenswrapper[5106]: I0320 00:23:59.753048 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b455fcd-8802-4d40-99b3-6863635bcccd-kube-api-access-q5xpz" (OuterVolumeSpecName: "kube-api-access-q5xpz") pod "7b455fcd-8802-4d40-99b3-6863635bcccd" (UID: "7b455fcd-8802-4d40-99b3-6863635bcccd"). InnerVolumeSpecName "kube-api-access-q5xpz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:23:59 crc kubenswrapper[5106]: I0320 00:23:59.844799 5106 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7b455fcd-8802-4d40-99b3-6863635bcccd-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 00:23:59 crc kubenswrapper[5106]: I0320 00:23:59.844835 5106 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7b455fcd-8802-4d40-99b3-6863635bcccd-util\") on node \"crc\" DevicePath \"\""
Mar 20 00:23:59 crc kubenswrapper[5106]: I0320 00:23:59.844846 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q5xpz\" (UniqueName: \"kubernetes.io/projected/7b455fcd-8802-4d40-99b3-6863635bcccd-kube-api-access-q5xpz\") on node \"crc\" DevicePath \"\""
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.136979 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566104-7kwwp"]
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.138068 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="44b52551-7156-48e9-bbe2-91d38dac73eb" containerName="extract"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.138097 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b52551-7156-48e9-bbe2-91d38dac73eb" containerName="extract"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.138110 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7b455fcd-8802-4d40-99b3-6863635bcccd" containerName="util"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.138118 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b455fcd-8802-4d40-99b3-6863635bcccd" containerName="util"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.138132 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="44b52551-7156-48e9-bbe2-91d38dac73eb" containerName="util"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.138141 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b52551-7156-48e9-bbe2-91d38dac73eb" containerName="util"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.138150 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7b455fcd-8802-4d40-99b3-6863635bcccd" containerName="pull"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.138160 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b455fcd-8802-4d40-99b3-6863635bcccd" containerName="pull"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.138203 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="44b52551-7156-48e9-bbe2-91d38dac73eb" containerName="pull"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.138211 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b52551-7156-48e9-bbe2-91d38dac73eb" containerName="pull"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.138224 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7b455fcd-8802-4d40-99b3-6863635bcccd" containerName="extract"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.138231 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b455fcd-8802-4d40-99b3-6863635bcccd" containerName="extract"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.138356 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="7b455fcd-8802-4d40-99b3-6863635bcccd" containerName="extract"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.138378 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="44b52551-7156-48e9-bbe2-91d38dac73eb" containerName="extract"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.146479 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566104-7kwwp"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.147180 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566104-7kwwp"]
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.149992 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.150002 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.152534 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5fjw8\""
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.249979 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qtx5\" (UniqueName: \"kubernetes.io/projected/b656fa81-2c43-4fa0-a4af-7f8fe391cc0c-kube-api-access-2qtx5\") pod \"auto-csr-approver-29566104-7kwwp\" (UID: \"b656fa81-2c43-4fa0-a4af-7f8fe391cc0c\") " pod="openshift-infra/auto-csr-approver-29566104-7kwwp"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.351003 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2qtx5\" (UniqueName: \"kubernetes.io/projected/b656fa81-2c43-4fa0-a4af-7f8fe391cc0c-kube-api-access-2qtx5\") pod \"auto-csr-approver-29566104-7kwwp\" (UID: \"b656fa81-2c43-4fa0-a4af-7f8fe391cc0c\") " pod="openshift-infra/auto-csr-approver-29566104-7kwwp"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.369428 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qtx5\" (UniqueName: \"kubernetes.io/projected/b656fa81-2c43-4fa0-a4af-7f8fe391cc0c-kube-api-access-2qtx5\") pod \"auto-csr-approver-29566104-7kwwp\" (UID: \"b656fa81-2c43-4fa0-a4af-7f8fe391cc0c\") " pod="openshift-infra/auto-csr-approver-29566104-7kwwp"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.408729 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.408734 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5" event={"ID":"7b455fcd-8802-4d40-99b3-6863635bcccd","Type":"ContainerDied","Data":"cc70eb70b99f782fa8b9269ff2046a9b19b09bfd086e98e33426b29ca0e5bb58"}
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.408852 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc70eb70b99f782fa8b9269ff2046a9b19b09bfd086e98e33426b29ca0e5bb58"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.468727 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566104-7kwwp"
Mar 20 00:24:00 crc kubenswrapper[5106]: I0320 00:24:00.684283 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566104-7kwwp"]
Mar 20 00:24:01 crc kubenswrapper[5106]: I0320 00:24:01.418342 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566104-7kwwp" event={"ID":"b656fa81-2c43-4fa0-a4af-7f8fe391cc0c","Type":"ContainerStarted","Data":"48f262642a30f0dcb7d1c3bbf23cfa693db4b8673c0ce5721e102a9d7ce81502"}
Mar 20 00:24:02 crc kubenswrapper[5106]: I0320 00:24:02.427045 5106 generic.go:358] "Generic (PLEG): container finished" podID="b656fa81-2c43-4fa0-a4af-7f8fe391cc0c" containerID="d512cc822b3bc239e7bdf571d2bf6f3c3909a88bc9416ccbd18400a5b62e194d" exitCode=0
Mar 20 00:24:02 crc kubenswrapper[5106]: I0320 00:24:02.427126 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566104-7kwwp" event={"ID":"b656fa81-2c43-4fa0-a4af-7f8fe391cc0c","Type":"ContainerDied","Data":"d512cc822b3bc239e7bdf571d2bf6f3c3909a88bc9416ccbd18400a5b62e194d"}
Mar 20 00:24:03 crc kubenswrapper[5106]: I0320 00:24:03.680965 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566104-7kwwp"
Mar 20 00:24:03 crc kubenswrapper[5106]: I0320 00:24:03.795984 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qtx5\" (UniqueName: \"kubernetes.io/projected/b656fa81-2c43-4fa0-a4af-7f8fe391cc0c-kube-api-access-2qtx5\") pod \"b656fa81-2c43-4fa0-a4af-7f8fe391cc0c\" (UID: \"b656fa81-2c43-4fa0-a4af-7f8fe391cc0c\") "
Mar 20 00:24:03 crc kubenswrapper[5106]: I0320 00:24:03.804134 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b656fa81-2c43-4fa0-a4af-7f8fe391cc0c-kube-api-access-2qtx5" (OuterVolumeSpecName: "kube-api-access-2qtx5") pod "b656fa81-2c43-4fa0-a4af-7f8fe391cc0c" (UID: "b656fa81-2c43-4fa0-a4af-7f8fe391cc0c"). InnerVolumeSpecName "kube-api-access-2qtx5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:24:03 crc kubenswrapper[5106]: I0320 00:24:03.897958 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2qtx5\" (UniqueName: \"kubernetes.io/projected/b656fa81-2c43-4fa0-a4af-7f8fe391cc0c-kube-api-access-2qtx5\") on node \"crc\" DevicePath \"\""
Mar 20 00:24:04 crc kubenswrapper[5106]: I0320 00:24:04.443274 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566104-7kwwp"
Mar 20 00:24:04 crc kubenswrapper[5106]: I0320 00:24:04.443298 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566104-7kwwp" event={"ID":"b656fa81-2c43-4fa0-a4af-7f8fe391cc0c","Type":"ContainerDied","Data":"48f262642a30f0dcb7d1c3bbf23cfa693db4b8673c0ce5721e102a9d7ce81502"}
Mar 20 00:24:04 crc kubenswrapper[5106]: I0320 00:24:04.443627 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48f262642a30f0dcb7d1c3bbf23cfa693db4b8673c0ce5721e102a9d7ce81502"
Mar 20 00:24:04 crc kubenswrapper[5106]: I0320 00:24:04.738723 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566098-4pwtk"]
Mar 20 00:24:04 crc kubenswrapper[5106]: I0320 00:24:04.742661 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566098-4pwtk"]
Mar 20 00:24:05 crc kubenswrapper[5106]: I0320 00:24:05.176388 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a06fbd90-04f2-40c9-9465-67cb7a1fdda4" path="/var/lib/kubelet/pods/a06fbd90-04f2-40c9-9465-67cb7a1fdda4/volumes"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.443353 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-7f569c45b4-gsq27"]
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.444047 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b656fa81-2c43-4fa0-a4af-7f8fe391cc0c" containerName="oc"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.444064 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="b656fa81-2c43-4fa0-a4af-7f8fe391cc0c" containerName="oc"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.444182 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="b656fa81-2c43-4fa0-a4af-7f8fe391cc0c" containerName="oc"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.531106 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-7f569c45b4-gsq27"]
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.531325 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-7f569c45b4-gsq27"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.535267 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-299xs\""
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.570897 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/2.log"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.602737 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xtksh_9da3e0a0-f6ab-4f57-925e-c59772b3d6d9/kube-multus/0.log"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.610121 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.626595 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/2.log"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.655448 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xtksh_9da3e0a0-f6ab-4f57-925e-c59772b3d6d9/kube-multus/0.log"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.656229 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/fa4b7b99-2abc-493d-8112-ce9b971dbef1-runner\") pod \"service-telemetry-operator-7f569c45b4-gsq27\" (UID: \"fa4b7b99-2abc-493d-8112-ce9b971dbef1\") " pod="service-telemetry/service-telemetry-operator-7f569c45b4-gsq27"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.656326 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jclt\" (UniqueName: \"kubernetes.io/projected/fa4b7b99-2abc-493d-8112-ce9b971dbef1-kube-api-access-8jclt\") pod \"service-telemetry-operator-7f569c45b4-gsq27\" (UID: \"fa4b7b99-2abc-493d-8112-ce9b971dbef1\") " pod="service-telemetry/service-telemetry-operator-7f569c45b4-gsq27"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.665126 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.757627 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8jclt\" (UniqueName: \"kubernetes.io/projected/fa4b7b99-2abc-493d-8112-ce9b971dbef1-kube-api-access-8jclt\") pod \"service-telemetry-operator-7f569c45b4-gsq27\" (UID: \"fa4b7b99-2abc-493d-8112-ce9b971dbef1\") " pod="service-telemetry/service-telemetry-operator-7f569c45b4-gsq27"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.757897 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/fa4b7b99-2abc-493d-8112-ce9b971dbef1-runner\") pod \"service-telemetry-operator-7f569c45b4-gsq27\" (UID: \"fa4b7b99-2abc-493d-8112-ce9b971dbef1\") " pod="service-telemetry/service-telemetry-operator-7f569c45b4-gsq27"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.759080 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/fa4b7b99-2abc-493d-8112-ce9b971dbef1-runner\") pod \"service-telemetry-operator-7f569c45b4-gsq27\" (UID: \"fa4b7b99-2abc-493d-8112-ce9b971dbef1\") " pod="service-telemetry/service-telemetry-operator-7f569c45b4-gsq27"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.780775 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jclt\" (UniqueName: \"kubernetes.io/projected/fa4b7b99-2abc-493d-8112-ce9b971dbef1-kube-api-access-8jclt\") pod \"service-telemetry-operator-7f569c45b4-gsq27\" (UID: \"fa4b7b99-2abc-493d-8112-ce9b971dbef1\") " pod="service-telemetry/service-telemetry-operator-7f569c45b4-gsq27"
Mar 20 00:24:07 crc kubenswrapper[5106]: I0320 00:24:07.847528 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-7f569c45b4-gsq27"
Mar 20 00:24:08 crc kubenswrapper[5106]: I0320 00:24:08.095786 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-7f569c45b4-gsq27"]
Mar 20 00:24:08 crc kubenswrapper[5106]: W0320 00:24:08.127607 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa4b7b99_2abc_493d_8112_ce9b971dbef1.slice/crio-9b858981dbe6b015b5efea726890c52a2b12618b43b02f68284fdee62812a0f0 WatchSource:0}: Error finding container 9b858981dbe6b015b5efea726890c52a2b12618b43b02f68284fdee62812a0f0: Status 404 returned error can't find the container with id 9b858981dbe6b015b5efea726890c52a2b12618b43b02f68284fdee62812a0f0
Mar 20 00:24:08 crc kubenswrapper[5106]: I0320 00:24:08.475034 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-7f569c45b4-gsq27" event={"ID":"fa4b7b99-2abc-493d-8112-ce9b971dbef1","Type":"ContainerStarted","Data":"9b858981dbe6b015b5efea726890c52a2b12618b43b02f68284fdee62812a0f0"}
Mar 20 00:24:10 crc kubenswrapper[5106]: I0320 00:24:10.011721 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-8slc4"]
Mar 20 00:24:10 crc kubenswrapper[5106]: I0320 00:24:10.018723 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-8slc4"]
Mar 20 00:24:10 crc kubenswrapper[5106]: I0320 00:24:10.018761 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-8slc4"
Mar 20 00:24:10 crc kubenswrapper[5106]: I0320 00:24:10.021700 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-mhsjt\""
Mar 20 00:24:10 crc kubenswrapper[5106]: I0320 00:24:10.201995 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9spn\" (UniqueName: \"kubernetes.io/projected/e070f157-22cb-4c96-a15e-4605ce0b8a93-kube-api-access-g9spn\") pod \"interconnect-operator-78b9bd8798-8slc4\" (UID: \"e070f157-22cb-4c96-a15e-4605ce0b8a93\") " pod="service-telemetry/interconnect-operator-78b9bd8798-8slc4"
Mar 20 00:24:10 crc kubenswrapper[5106]: I0320 00:24:10.303629 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g9spn\" (UniqueName: \"kubernetes.io/projected/e070f157-22cb-4c96-a15e-4605ce0b8a93-kube-api-access-g9spn\") pod \"interconnect-operator-78b9bd8798-8slc4\" (UID: \"e070f157-22cb-4c96-a15e-4605ce0b8a93\") " pod="service-telemetry/interconnect-operator-78b9bd8798-8slc4"
Mar 20 00:24:10 crc kubenswrapper[5106]: I0320 00:24:10.327152 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9spn\" (UniqueName: \"kubernetes.io/projected/e070f157-22cb-4c96-a15e-4605ce0b8a93-kube-api-access-g9spn\") pod \"interconnect-operator-78b9bd8798-8slc4\" (UID: \"e070f157-22cb-4c96-a15e-4605ce0b8a93\") " pod="service-telemetry/interconnect-operator-78b9bd8798-8slc4"
Mar 20 00:24:10 crc kubenswrapper[5106]: I0320 00:24:10.364602 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-8slc4"
Mar 20 00:24:10 crc kubenswrapper[5106]: I0320 00:24:10.581361 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-8slc4"]
Mar 20 00:24:10 crc kubenswrapper[5106]: W0320 00:24:10.593888 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode070f157_22cb_4c96_a15e_4605ce0b8a93.slice/crio-2f7ba89798ce690107833a4ef97e35c49332a722cfc0fd94d24abb18c8c38538 WatchSource:0}: Error finding container 2f7ba89798ce690107833a4ef97e35c49332a722cfc0fd94d24abb18c8c38538: Status 404 returned error can't find the container with id 2f7ba89798ce690107833a4ef97e35c49332a722cfc0fd94d24abb18c8c38538
Mar 20 00:24:11 crc kubenswrapper[5106]: I0320 00:24:11.500520 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-8slc4" event={"ID":"e070f157-22cb-4c96-a15e-4605ce0b8a93","Type":"ContainerStarted","Data":"2f7ba89798ce690107833a4ef97e35c49332a722cfc0fd94d24abb18c8c38538"}
Mar 20 00:24:12 crc kubenswrapper[5106]: I0320 00:24:12.747738 5106 scope.go:117] "RemoveContainer" containerID="64e6c72363c74d1fca1e26ec49ee0ff2c3ee760170974a8a7903d02be12ddfd2"
Mar 20 00:24:22 crc kubenswrapper[5106]: I0320 00:24:22.584752 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-7f569c45b4-gsq27" event={"ID":"fa4b7b99-2abc-493d-8112-ce9b971dbef1","Type":"ContainerStarted","Data":"378e757d8650a63499013b531ec89cb0be21a1e8a173ba7fe08ffa49804f91c8"}
Mar 20 00:24:22 crc kubenswrapper[5106]: I0320 00:24:22.586823 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-8slc4" event={"ID":"e070f157-22cb-4c96-a15e-4605ce0b8a93","Type":"ContainerStarted","Data":"0a796aa429ffc36d4afb131a1c44dcaf16cd821dd5c7a0dfe20a8680681a3ba2"}
Mar 20 00:24:22 crc kubenswrapper[5106]: I0320 00:24:22.607900 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-7f569c45b4-gsq27" podStartSLOduration=1.885792816 podStartE2EDuration="15.607880358s" podCreationTimestamp="2026-03-20 00:24:07 +0000 UTC" firstStartedPulling="2026-03-20 00:24:08.131269172 +0000 UTC m=+902.565003226" lastFinishedPulling="2026-03-20 00:24:21.853356714 +0000 UTC m=+916.287090768" observedRunningTime="2026-03-20 00:24:22.601982952 +0000 UTC m=+917.035717016" watchObservedRunningTime="2026-03-20 00:24:22.607880358 +0000 UTC m=+917.041614422"
Mar 20 00:24:22 crc kubenswrapper[5106]: I0320 00:24:22.623159 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-8slc4" podStartSLOduration=2.394886951 podStartE2EDuration="13.623141766s" podCreationTimestamp="2026-03-20 00:24:09 +0000 UTC" firstStartedPulling="2026-03-20 00:24:10.597167846 +0000 UTC m=+905.030901900" lastFinishedPulling="2026-03-20 00:24:21.825422661 +0000 UTC m=+916.259156715" observedRunningTime="2026-03-20 00:24:22.618349818 +0000 UTC m=+917.052083872" watchObservedRunningTime="2026-03-20 00:24:22.623141766 +0000 UTC m=+917.056875810"
Mar 20 00:24:44 crc kubenswrapper[5106]: I0320 00:24:44.869058 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xrwnk"]
Mar 20 00:24:44 crc kubenswrapper[5106]: I0320 00:24:44.887740 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrwnk"]
Mar 20 00:24:44 crc kubenswrapper[5106]: I0320 00:24:44.887862 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrwnk"
Mar 20 00:24:45 crc kubenswrapper[5106]: I0320 00:24:45.035487 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbvl2\" (UniqueName: \"kubernetes.io/projected/4034c289-2147-4417-af83-20bc4d91baeb-kube-api-access-gbvl2\") pod \"community-operators-xrwnk\" (UID: \"4034c289-2147-4417-af83-20bc4d91baeb\") " pod="openshift-marketplace/community-operators-xrwnk"
Mar 20 00:24:45 crc kubenswrapper[5106]: I0320 00:24:45.035556 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4034c289-2147-4417-af83-20bc4d91baeb-catalog-content\") pod \"community-operators-xrwnk\" (UID: \"4034c289-2147-4417-af83-20bc4d91baeb\") " pod="openshift-marketplace/community-operators-xrwnk"
Mar 20 00:24:45 crc kubenswrapper[5106]: I0320 00:24:45.035662 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4034c289-2147-4417-af83-20bc4d91baeb-utilities\") pod \"community-operators-xrwnk\" (UID: \"4034c289-2147-4417-af83-20bc4d91baeb\") " pod="openshift-marketplace/community-operators-xrwnk"
Mar 20 00:24:45 crc kubenswrapper[5106]: I0320 00:24:45.136465 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gbvl2\" (UniqueName: \"kubernetes.io/projected/4034c289-2147-4417-af83-20bc4d91baeb-kube-api-access-gbvl2\") pod \"community-operators-xrwnk\" (UID: \"4034c289-2147-4417-af83-20bc4d91baeb\") " pod="openshift-marketplace/community-operators-xrwnk"
Mar 20 00:24:45 crc kubenswrapper[5106]: I0320 00:24:45.136538 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4034c289-2147-4417-af83-20bc4d91baeb-catalog-content\") pod \"community-operators-xrwnk\" (UID: \"4034c289-2147-4417-af83-20bc4d91baeb\") " pod="openshift-marketplace/community-operators-xrwnk"
Mar 20 00:24:45 crc kubenswrapper[5106]: I0320 00:24:45.136913 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4034c289-2147-4417-af83-20bc4d91baeb-utilities\") pod \"community-operators-xrwnk\" (UID: \"4034c289-2147-4417-af83-20bc4d91baeb\") " pod="openshift-marketplace/community-operators-xrwnk"
Mar 20 00:24:45 crc kubenswrapper[5106]: I0320 00:24:45.137027 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4034c289-2147-4417-af83-20bc4d91baeb-catalog-content\") pod \"community-operators-xrwnk\" (UID: \"4034c289-2147-4417-af83-20bc4d91baeb\") " pod="openshift-marketplace/community-operators-xrwnk"
Mar 20 00:24:45 crc kubenswrapper[5106]: I0320 00:24:45.137206 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4034c289-2147-4417-af83-20bc4d91baeb-utilities\") pod \"community-operators-xrwnk\" (UID: \"4034c289-2147-4417-af83-20bc4d91baeb\") " pod="openshift-marketplace/community-operators-xrwnk"
Mar 20 00:24:45 crc kubenswrapper[5106]: I0320 00:24:45.161412 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbvl2\" (UniqueName: \"kubernetes.io/projected/4034c289-2147-4417-af83-20bc4d91baeb-kube-api-access-gbvl2\") pod \"community-operators-xrwnk\" (UID: \"4034c289-2147-4417-af83-20bc4d91baeb\") " pod="openshift-marketplace/community-operators-xrwnk"
Mar 20 00:24:45 crc kubenswrapper[5106]: I0320 00:24:45.253791 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrwnk"
Mar 20 00:24:45 crc kubenswrapper[5106]: I0320 00:24:45.728546 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrwnk"]
Mar 20 00:24:46 crc kubenswrapper[5106]: I0320 00:24:46.406933 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-69dcw"]
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.606818 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.609390 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\""
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.616081 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\""
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.616291 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-4lqqn\""
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.616402 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\""
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.616598 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\""
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.616762 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\""
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.616961 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\""
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.617681 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-69dcw"]
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.617725 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrwnk" event={"ID":"4034c289-2147-4417-af83-20bc4d91baeb","Type":"ContainerStarted","Data":"477e162299ca9c73fca077713ab4d5c3382090e8bb5da161d00cd11cfc67e6f2"}
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.772337 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.772646 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh6qt\" (UniqueName: \"kubernetes.io/projected/d20ab026-6f4e-4563-af61-88bef726e748-kube-api-access-lh6qt\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.772800 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.772987 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.773145 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.773267 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-sasl-users\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.773706 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d20ab026-6f4e-4563-af61-88bef726e748-sasl-config\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.875488 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.875559 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.875635 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.875697 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-sasl-users\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.875785 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d20ab026-6f4e-4563-af61-88bef726e748-sasl-config\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.875839 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.875859 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lh6qt\" (UniqueName: \"kubernetes.io/projected/d20ab026-6f4e-4563-af61-88bef726e748-kube-api-access-lh6qt\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.877217 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d20ab026-6f4e-4563-af61-88bef726e748-sasl-config\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.882083 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.882353 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.882595 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.896806 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh6qt\" (UniqueName: \"kubernetes.io/projected/d20ab026-6f4e-4563-af61-88bef726e748-kube-api-access-lh6qt\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.898147 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc kubenswrapper[5106]: I0320 00:24:47.898596 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-sasl-users\") pod \"default-interconnect-55bf8d5cb-69dcw\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") " pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:24:47 crc
kubenswrapper[5106]: I0320 00:24:47.930245 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw" Mar 20 00:24:48 crc kubenswrapper[5106]: I0320 00:24:48.111742 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-69dcw"] Mar 20 00:24:48 crc kubenswrapper[5106]: E0320 00:24:48.750503 5106 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to pull image or OCI artifact: pull image err: copying system image from manifest list: parsing image configuration: Download config.json digest sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 does not match expected sha256:0dc229eb9f2b424eea579c818e2d7ec0585c581c87adca879b3560b7399eecc2; artifact err: provided artifact is a container image" image="registry.redhat.io/amq7/amq-interconnect@sha256:31d87473fa684178a694f9ee331d3c80f2653f9533cb65c2a325752166a077e9" Mar 20 00:24:48 crc kubenswrapper[5106]: E0320 00:24:48.751164 5106 kuberuntime_manager.go:1358] "Unhandled Error" err=< Mar 20 00:24:48 crc kubenswrapper[5106]: container &Container{Name:default-interconnect,Image:registry.redhat.io/amq7/amq-interconnect@sha256:31d87473fa684178a694f9ee331d3c80f2653f9533cb65c2a325752166a077e9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:port-5672,HostPort:0,ContainerPort:5672,Protocol:TCP,HostIP:,},ContainerPort{Name:port-55671,HostPort:0,ContainerPort:55671,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:APPLICATION_NAME,Value:default-interconnect,ValueFrom:nil,},EnvVar{Name:QDROUTERD_CONF,Value: Mar 20 00:24:48 crc kubenswrapper[5106]: router { Mar 20 00:24:48 crc kubenswrapper[5106]: mode: interior Mar 20 00:24:48 crc kubenswrapper[5106]: id: ${HOSTNAME} Mar 20 00:24:48 crc kubenswrapper[5106]: } Mar 20 00:24:48 crc kubenswrapper[5106]: Mar 20 00:24:48 crc kubenswrapper[5106]: listener { Mar 20 00:24:48 crc 
kubenswrapper[5106]: host: 127.0.0.1 Mar 20 00:24:48 crc kubenswrapper[5106]: port: 5672 Mar 20 00:24:48 crc kubenswrapper[5106]: role: normal Mar 20 00:24:48 crc kubenswrapper[5106]: } Mar 20 00:24:48 crc kubenswrapper[5106]: listener { Mar 20 00:24:48 crc kubenswrapper[5106]: name: health-and-stats Mar 20 00:24:48 crc kubenswrapper[5106]: port: 8888 Mar 20 00:24:48 crc kubenswrapper[5106]: http: true Mar 20 00:24:48 crc kubenswrapper[5106]: healthz: true Mar 20 00:24:48 crc kubenswrapper[5106]: metrics: true Mar 20 00:24:48 crc kubenswrapper[5106]: websockets: false Mar 20 00:24:48 crc kubenswrapper[5106]: httpRootDir: invalid Mar 20 00:24:48 crc kubenswrapper[5106]: } Mar 20 00:24:48 crc kubenswrapper[5106]: Mar 20 00:24:48 crc kubenswrapper[5106]: listener { Mar 20 00:24:48 crc kubenswrapper[5106]: role: inter-router Mar 20 00:24:48 crc kubenswrapper[5106]: port: 55671 Mar 20 00:24:48 crc kubenswrapper[5106]: saslMechanisms: EXTERNAL Mar 20 00:24:48 crc kubenswrapper[5106]: authenticatePeer: true Mar 20 00:24:48 crc kubenswrapper[5106]: sslProfile: inter-router Mar 20 00:24:48 crc kubenswrapper[5106]: } Mar 20 00:24:48 crc kubenswrapper[5106]: Mar 20 00:24:48 crc kubenswrapper[5106]: listener { Mar 20 00:24:48 crc kubenswrapper[5106]: role: edge Mar 20 00:24:48 crc kubenswrapper[5106]: port: 5671 Mar 20 00:24:48 crc kubenswrapper[5106]: saslMechanisms: PLAIN Mar 20 00:24:48 crc kubenswrapper[5106]: authenticatePeer: true Mar 20 00:24:48 crc kubenswrapper[5106]: sslProfile: openstack Mar 20 00:24:48 crc kubenswrapper[5106]: } Mar 20 00:24:48 crc kubenswrapper[5106]: listener { Mar 20 00:24:48 crc kubenswrapper[5106]: role: edge Mar 20 00:24:48 crc kubenswrapper[5106]: port: 5673 Mar 20 00:24:48 crc kubenswrapper[5106]: linkCapacity: 25000 Mar 20 00:24:48 crc kubenswrapper[5106]: } Mar 20 00:24:48 crc kubenswrapper[5106]: Mar 20 00:24:48 crc kubenswrapper[5106]: sslProfile { Mar 20 00:24:48 crc kubenswrapper[5106]: name: openstack Mar 20 00:24:48 crc 
kubenswrapper[5106]: certFile: /etc/qpid-dispatch-certs/openstack/default-interconnect-openstack-credentials/tls.crt Mar 20 00:24:48 crc kubenswrapper[5106]: privateKeyFile: /etc/qpid-dispatch-certs/openstack/default-interconnect-openstack-credentials/tls.key Mar 20 00:24:48 crc kubenswrapper[5106]: caCertFile: /etc/qpid-dispatch-certs/openstack/default-interconnect-openstack-ca/tls.crt Mar 20 00:24:48 crc kubenswrapper[5106]: } Mar 20 00:24:48 crc kubenswrapper[5106]: sslProfile { Mar 20 00:24:48 crc kubenswrapper[5106]: name: inter-router Mar 20 00:24:48 crc kubenswrapper[5106]: certFile: /etc/qpid-dispatch-certs/inter-router/default-interconnect-inter-router-credentials/tls.crt Mar 20 00:24:48 crc kubenswrapper[5106]: privateKeyFile: /etc/qpid-dispatch-certs/inter-router/default-interconnect-inter-router-credentials/tls.key Mar 20 00:24:48 crc kubenswrapper[5106]: caCertFile: /etc/qpid-dispatch-certs/inter-router/default-interconnect-inter-router-ca/tls.crt Mar 20 00:24:48 crc kubenswrapper[5106]: } Mar 20 00:24:48 crc kubenswrapper[5106]: Mar 20 00:24:48 crc kubenswrapper[5106]: address { Mar 20 00:24:48 crc kubenswrapper[5106]: prefix: closest Mar 20 00:24:48 crc kubenswrapper[5106]: distribution: closest Mar 20 00:24:48 crc kubenswrapper[5106]: } Mar 20 00:24:48 crc kubenswrapper[5106]: address { Mar 20 00:24:48 crc kubenswrapper[5106]: prefix: multicast Mar 20 00:24:48 crc kubenswrapper[5106]: distribution: multicast Mar 20 00:24:48 crc kubenswrapper[5106]: } Mar 20 00:24:48 crc kubenswrapper[5106]: address { Mar 20 00:24:48 crc kubenswrapper[5106]: prefix: unicast Mar 20 00:24:48 crc kubenswrapper[5106]: distribution: closest Mar 20 00:24:48 crc kubenswrapper[5106]: } Mar 20 00:24:48 crc kubenswrapper[5106]: address { Mar 20 00:24:48 crc kubenswrapper[5106]: prefix: exclusive Mar 20 00:24:48 crc kubenswrapper[5106]: distribution: closest Mar 20 00:24:48 crc kubenswrapper[5106]: } Mar 20 00:24:48 crc kubenswrapper[5106]: address { Mar 20 00:24:48 crc 
kubenswrapper[5106]: prefix: broadcast Mar 20 00:24:48 crc kubenswrapper[5106]: distribution: multicast Mar 20 00:24:48 crc kubenswrapper[5106]: } Mar 20 00:24:48 crc kubenswrapper[5106]: address { Mar 20 00:24:48 crc kubenswrapper[5106]: prefix: collectd Mar 20 00:24:48 crc kubenswrapper[5106]: distribution: multicast Mar 20 00:24:48 crc kubenswrapper[5106]: } Mar 20 00:24:48 crc kubenswrapper[5106]: address { Mar 20 00:24:48 crc kubenswrapper[5106]: prefix: ceilometer Mar 20 00:24:48 crc kubenswrapper[5106]: distribution: multicast Mar 20 00:24:48 crc kubenswrapper[5106]: } Mar 20 00:24:48 crc kubenswrapper[5106]: Mar 20 00:24:48 crc kubenswrapper[5106]: Mar 20 00:24:48 crc kubenswrapper[5106]: Mar 20 00:24:48 crc kubenswrapper[5106]: Mar 20 00:24:48 crc kubenswrapper[5106]: ,ValueFrom:nil,},EnvVar{Name:QDROUTERD_AUTO_CREATE_SASLDB_SOURCE,Value:/etc/qpid-dispatch/sasl-users/,ValueFrom:nil,},EnvVar{Name:QDROUTERD_AUTO_CREATE_SASLDB_PATH,Value:/tmp/qdrouterd.sasldb,ValueFrom:nil,},EnvVar{Name:POD_COUNT,Value:1,ValueFrom:nil,},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:QDROUTERD_AUTO_MESH_DISCOVERY,Value:QUERY,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-interconnect-openstack-credentials,ReadOnly:false,MountPath:/etc/qpid-dispatch-certs/openstack/default-interconnect-openstack-credentials,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:default-interconnect-openstack-ca,ReadOnly:false,MountPath:/etc/qpid-dispatch-certs/openstack/default-interconnect-openstack-ca,SubPa
th:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:default-interconnect-inter-router-credentials,ReadOnly:false,MountPath:/etc/qpid-dispatch-certs/inter-router/default-interconnect-inter-router-credentials,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:default-interconnect-inter-router-ca,ReadOnly:false,MountPath:/etc/qpid-dispatch-certs/inter-router/default-interconnect-inter-router-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sasl-users,ReadOnly:false,MountPath:/etc/qpid-dispatch/sasl-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sasl-config,ReadOnly:false,MountPath:/etc/sasl2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lh6qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8888 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
default-interconnect-55bf8d5cb-69dcw_service-telemetry(d20ab026-6f4e-4563-af61-88bef726e748): ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: parsing image configuration: Download config.json digest sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 does not match expected sha256:0dc229eb9f2b424eea579c818e2d7ec0585c581c87adca879b3560b7399eecc2; artifact err: provided artifact is a container image Mar 20 00:24:48 crc kubenswrapper[5106]: > logger="UnhandledError" Mar 20 00:24:48 crc kubenswrapper[5106]: E0320 00:24:48.752310 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"default-interconnect\" with ErrImagePull: \"unable to pull image or OCI artifact: pull image err: copying system image from manifest list: parsing image configuration: Download config.json digest sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 does not match expected sha256:0dc229eb9f2b424eea579c818e2d7ec0585c581c87adca879b3560b7399eecc2; artifact err: provided artifact is a container image\"" pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw" podUID="d20ab026-6f4e-4563-af61-88bef726e748" Mar 20 00:24:48 crc kubenswrapper[5106]: I0320 00:24:48.762498 5106 generic.go:358] "Generic (PLEG): container finished" podID="4034c289-2147-4417-af83-20bc4d91baeb" containerID="6b2cefc5a4b36de0e75481fb891e0486229f6255ea3dcc1de2f8f781cfa09a0b" exitCode=0 Mar 20 00:24:48 crc kubenswrapper[5106]: I0320 00:24:48.762643 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrwnk" event={"ID":"4034c289-2147-4417-af83-20bc4d91baeb","Type":"ContainerDied","Data":"6b2cefc5a4b36de0e75481fb891e0486229f6255ea3dcc1de2f8f781cfa09a0b"} Mar 20 00:24:48 crc kubenswrapper[5106]: I0320 00:24:48.764020 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw" 
event={"ID":"d20ab026-6f4e-4563-af61-88bef726e748","Type":"ContainerStarted","Data":"8228c477c543a2ffe268a1821e323d61c33143af24b632623bae1d77a11288a2"} Mar 20 00:24:48 crc kubenswrapper[5106]: E0320 00:24:48.766484 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"default-interconnect\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/amq7/amq-interconnect@sha256:31d87473fa684178a694f9ee331d3c80f2653f9533cb65c2a325752166a077e9\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: parsing image configuration: Download config.json digest sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 does not match expected sha256:0dc229eb9f2b424eea579c818e2d7ec0585c581c87adca879b3560b7399eecc2; artifact err: provided artifact is a container image\"" pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw" podUID="d20ab026-6f4e-4563-af61-88bef726e748" Mar 20 00:24:49 crc kubenswrapper[5106]: E0320 00:24:49.773978 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"default-interconnect\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/amq7/amq-interconnect@sha256:31d87473fa684178a694f9ee331d3c80f2653f9533cb65c2a325752166a077e9\\\": ErrImagePull: unable to pull image or OCI artifact: pull image err: copying system image from manifest list: parsing image configuration: Download config.json digest sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 does not match expected sha256:0dc229eb9f2b424eea579c818e2d7ec0585c581c87adca879b3560b7399eecc2; artifact err: provided artifact is a container image\"" pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw" podUID="d20ab026-6f4e-4563-af61-88bef726e748" Mar 20 00:24:50 crc kubenswrapper[5106]: I0320 00:24:50.787746 5106 generic.go:358] "Generic (PLEG): container finished" 
podID="4034c289-2147-4417-af83-20bc4d91baeb" containerID="8a417b7223278e6c8d79b85958fc9efabcccdd28ffaa63ea32195f0172d3a09d" exitCode=0 Mar 20 00:24:50 crc kubenswrapper[5106]: I0320 00:24:50.787828 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrwnk" event={"ID":"4034c289-2147-4417-af83-20bc4d91baeb","Type":"ContainerDied","Data":"8a417b7223278e6c8d79b85958fc9efabcccdd28ffaa63ea32195f0172d3a09d"} Mar 20 00:24:51 crc kubenswrapper[5106]: I0320 00:24:51.797476 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrwnk" event={"ID":"4034c289-2147-4417-af83-20bc4d91baeb","Type":"ContainerStarted","Data":"adde945a9342c8b452895ae66a08c539219a5080959dd401c7303ba50821c8b2"} Mar 20 00:24:55 crc kubenswrapper[5106]: I0320 00:24:55.254120 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-xrwnk" Mar 20 00:24:55 crc kubenswrapper[5106]: I0320 00:24:55.255495 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xrwnk" Mar 20 00:24:55 crc kubenswrapper[5106]: I0320 00:24:55.314354 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xrwnk" Mar 20 00:24:55 crc kubenswrapper[5106]: I0320 00:24:55.337333 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xrwnk" podStartSLOduration=10.41049359 podStartE2EDuration="11.337310775s" podCreationTimestamp="2026-03-20 00:24:44 +0000 UTC" firstStartedPulling="2026-03-20 00:24:48.775246955 +0000 UTC m=+943.208981059" lastFinishedPulling="2026-03-20 00:24:49.70206418 +0000 UTC m=+944.135798244" observedRunningTime="2026-03-20 00:24:51.825875708 +0000 UTC m=+946.259609762" watchObservedRunningTime="2026-03-20 00:24:55.337310775 +0000 UTC m=+949.771044839" Mar 20 
00:24:55 crc kubenswrapper[5106]: I0320 00:24:55.885901 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xrwnk" Mar 20 00:24:55 crc kubenswrapper[5106]: I0320 00:24:55.930208 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xrwnk"] Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.563569 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.598334 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.598556 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.600650 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.602496 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.602599 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.602519 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.602658 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.602838 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.604137 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.604167 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.604568 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-8jr66\"" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.606938 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.718126 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f52fc685-1493-412b-8948-8bc1798518f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f52fc685-1493-412b-8948-8bc1798518f8\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.718172 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/353db55f-dddd-44dc-aade-e75b5d1783e7-config-out\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.718201 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-config\") pod \"prometheus-default-0\" (UID: 
\"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.718226 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.718249 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.718354 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/353db55f-dddd-44dc-aade-e75b5d1783e7-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.718384 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk9gn\" (UniqueName: \"kubernetes.io/projected/353db55f-dddd-44dc-aade-e75b5d1783e7-kube-api-access-bk9gn\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.718561 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" 
(UniqueName: \"kubernetes.io/configmap/353db55f-dddd-44dc-aade-e75b5d1783e7-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.718650 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/353db55f-dddd-44dc-aade-e75b5d1783e7-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.718689 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/353db55f-dddd-44dc-aade-e75b5d1783e7-tls-assets\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.718712 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-web-config\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.718783 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/353db55f-dddd-44dc-aade-e75b5d1783e7-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.820328 5106 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/353db55f-dddd-44dc-aade-e75b5d1783e7-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.820371 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/353db55f-dddd-44dc-aade-e75b5d1783e7-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.820394 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/353db55f-dddd-44dc-aade-e75b5d1783e7-tls-assets\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.820414 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-web-config\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.820433 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/353db55f-dddd-44dc-aade-e75b5d1783e7-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.821227 5106 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/353db55f-dddd-44dc-aade-e75b5d1783e7-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.821275 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/353db55f-dddd-44dc-aade-e75b5d1783e7-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.821331 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-f52fc685-1493-412b-8948-8bc1798518f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f52fc685-1493-412b-8948-8bc1798518f8\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.821396 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/353db55f-dddd-44dc-aade-e75b5d1783e7-config-out\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.821469 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-config\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.821512 5106 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.821554 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.821637 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/353db55f-dddd-44dc-aade-e75b5d1783e7-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.821660 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bk9gn\" (UniqueName: \"kubernetes.io/projected/353db55f-dddd-44dc-aade-e75b5d1783e7-kube-api-access-bk9gn\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.822207 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/353db55f-dddd-44dc-aade-e75b5d1783e7-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: E0320 00:24:57.822303 5106 
secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Mar 20 00:24:57 crc kubenswrapper[5106]: E0320 00:24:57.822354 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-secret-default-prometheus-proxy-tls podName:353db55f-dddd-44dc-aade-e75b5d1783e7 nodeName:}" failed. No retries permitted until 2026-03-20 00:24:58.322337907 +0000 UTC m=+952.756071951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "353db55f-dddd-44dc-aade-e75b5d1783e7") : secret "default-prometheus-proxy-tls" not found Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.823034 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/353db55f-dddd-44dc-aade-e75b5d1783e7-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.825979 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/353db55f-dddd-44dc-aade-e75b5d1783e7-config-out\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.827157 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/353db55f-dddd-44dc-aade-e75b5d1783e7-tls-assets\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc 
kubenswrapper[5106]: I0320 00:24:57.827680 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.828716 5106 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.828759 5106 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-f52fc685-1493-412b-8948-8bc1798518f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f52fc685-1493-412b-8948-8bc1798518f8\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/207987edddd61f47e066abc8ff5e2c19fdef344559dc496904b45b08a735d4a6/globalmount\"" pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.828991 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-config\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.829698 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-web-config\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.840266 5106 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bk9gn\" (UniqueName: \"kubernetes.io/projected/353db55f-dddd-44dc-aade-e75b5d1783e7-kube-api-access-bk9gn\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.843642 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xrwnk" podUID="4034c289-2147-4417-af83-20bc4d91baeb" containerName="registry-server" containerID="cri-o://adde945a9342c8b452895ae66a08c539219a5080959dd401c7303ba50821c8b2" gracePeriod=2 Mar 20 00:24:57 crc kubenswrapper[5106]: I0320 00:24:57.861993 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-f52fc685-1493-412b-8948-8bc1798518f8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f52fc685-1493-412b-8948-8bc1798518f8\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.228646 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xrwnk" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.327873 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbvl2\" (UniqueName: \"kubernetes.io/projected/4034c289-2147-4417-af83-20bc4d91baeb-kube-api-access-gbvl2\") pod \"4034c289-2147-4417-af83-20bc4d91baeb\" (UID: \"4034c289-2147-4417-af83-20bc4d91baeb\") " Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.327933 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4034c289-2147-4417-af83-20bc4d91baeb-utilities\") pod \"4034c289-2147-4417-af83-20bc4d91baeb\" (UID: \"4034c289-2147-4417-af83-20bc4d91baeb\") " Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.328060 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4034c289-2147-4417-af83-20bc4d91baeb-catalog-content\") pod \"4034c289-2147-4417-af83-20bc4d91baeb\" (UID: \"4034c289-2147-4417-af83-20bc4d91baeb\") " Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.328316 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:58 crc kubenswrapper[5106]: E0320 00:24:58.328515 5106 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Mar 20 00:24:58 crc kubenswrapper[5106]: E0320 00:24:58.328663 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-secret-default-prometheus-proxy-tls 
podName:353db55f-dddd-44dc-aade-e75b5d1783e7 nodeName:}" failed. No retries permitted until 2026-03-20 00:24:59.328635288 +0000 UTC m=+953.762369352 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "353db55f-dddd-44dc-aade-e75b5d1783e7") : secret "default-prometheus-proxy-tls" not found Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.330161 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4034c289-2147-4417-af83-20bc4d91baeb-utilities" (OuterVolumeSpecName: "utilities") pod "4034c289-2147-4417-af83-20bc4d91baeb" (UID: "4034c289-2147-4417-af83-20bc4d91baeb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.333067 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4034c289-2147-4417-af83-20bc4d91baeb-kube-api-access-gbvl2" (OuterVolumeSpecName: "kube-api-access-gbvl2") pod "4034c289-2147-4417-af83-20bc4d91baeb" (UID: "4034c289-2147-4417-af83-20bc4d91baeb"). InnerVolumeSpecName "kube-api-access-gbvl2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.382278 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4034c289-2147-4417-af83-20bc4d91baeb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4034c289-2147-4417-af83-20bc4d91baeb" (UID: "4034c289-2147-4417-af83-20bc4d91baeb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.429796 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4034c289-2147-4417-af83-20bc4d91baeb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.429836 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gbvl2\" (UniqueName: \"kubernetes.io/projected/4034c289-2147-4417-af83-20bc4d91baeb-kube-api-access-gbvl2\") on node \"crc\" DevicePath \"\"" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.429858 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4034c289-2147-4417-af83-20bc4d91baeb-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.857564 5106 generic.go:358] "Generic (PLEG): container finished" podID="4034c289-2147-4417-af83-20bc4d91baeb" containerID="adde945a9342c8b452895ae66a08c539219a5080959dd401c7303ba50821c8b2" exitCode=0 Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.857638 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrwnk" event={"ID":"4034c289-2147-4417-af83-20bc4d91baeb","Type":"ContainerDied","Data":"adde945a9342c8b452895ae66a08c539219a5080959dd401c7303ba50821c8b2"} Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.858088 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrwnk" event={"ID":"4034c289-2147-4417-af83-20bc4d91baeb","Type":"ContainerDied","Data":"477e162299ca9c73fca077713ab4d5c3382090e8bb5da161d00cd11cfc67e6f2"} Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.857678 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xrwnk" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.858116 5106 scope.go:117] "RemoveContainer" containerID="adde945a9342c8b452895ae66a08c539219a5080959dd401c7303ba50821c8b2" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.875312 5106 scope.go:117] "RemoveContainer" containerID="8a417b7223278e6c8d79b85958fc9efabcccdd28ffaa63ea32195f0172d3a09d" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.896642 5106 scope.go:117] "RemoveContainer" containerID="6b2cefc5a4b36de0e75481fb891e0486229f6255ea3dcc1de2f8f781cfa09a0b" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.900739 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xrwnk"] Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.906229 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xrwnk"] Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.917555 5106 scope.go:117] "RemoveContainer" containerID="adde945a9342c8b452895ae66a08c539219a5080959dd401c7303ba50821c8b2" Mar 20 00:24:58 crc kubenswrapper[5106]: E0320 00:24:58.919111 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adde945a9342c8b452895ae66a08c539219a5080959dd401c7303ba50821c8b2\": container with ID starting with adde945a9342c8b452895ae66a08c539219a5080959dd401c7303ba50821c8b2 not found: ID does not exist" containerID="adde945a9342c8b452895ae66a08c539219a5080959dd401c7303ba50821c8b2" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.919145 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adde945a9342c8b452895ae66a08c539219a5080959dd401c7303ba50821c8b2"} err="failed to get container status \"adde945a9342c8b452895ae66a08c539219a5080959dd401c7303ba50821c8b2\": rpc error: code = NotFound desc = could not find 
container \"adde945a9342c8b452895ae66a08c539219a5080959dd401c7303ba50821c8b2\": container with ID starting with adde945a9342c8b452895ae66a08c539219a5080959dd401c7303ba50821c8b2 not found: ID does not exist" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.919174 5106 scope.go:117] "RemoveContainer" containerID="8a417b7223278e6c8d79b85958fc9efabcccdd28ffaa63ea32195f0172d3a09d" Mar 20 00:24:58 crc kubenswrapper[5106]: E0320 00:24:58.919519 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a417b7223278e6c8d79b85958fc9efabcccdd28ffaa63ea32195f0172d3a09d\": container with ID starting with 8a417b7223278e6c8d79b85958fc9efabcccdd28ffaa63ea32195f0172d3a09d not found: ID does not exist" containerID="8a417b7223278e6c8d79b85958fc9efabcccdd28ffaa63ea32195f0172d3a09d" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.919643 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a417b7223278e6c8d79b85958fc9efabcccdd28ffaa63ea32195f0172d3a09d"} err="failed to get container status \"8a417b7223278e6c8d79b85958fc9efabcccdd28ffaa63ea32195f0172d3a09d\": rpc error: code = NotFound desc = could not find container \"8a417b7223278e6c8d79b85958fc9efabcccdd28ffaa63ea32195f0172d3a09d\": container with ID starting with 8a417b7223278e6c8d79b85958fc9efabcccdd28ffaa63ea32195f0172d3a09d not found: ID does not exist" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.919756 5106 scope.go:117] "RemoveContainer" containerID="6b2cefc5a4b36de0e75481fb891e0486229f6255ea3dcc1de2f8f781cfa09a0b" Mar 20 00:24:58 crc kubenswrapper[5106]: E0320 00:24:58.920150 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b2cefc5a4b36de0e75481fb891e0486229f6255ea3dcc1de2f8f781cfa09a0b\": container with ID starting with 6b2cefc5a4b36de0e75481fb891e0486229f6255ea3dcc1de2f8f781cfa09a0b not found: ID does 
not exist" containerID="6b2cefc5a4b36de0e75481fb891e0486229f6255ea3dcc1de2f8f781cfa09a0b" Mar 20 00:24:58 crc kubenswrapper[5106]: I0320 00:24:58.920175 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b2cefc5a4b36de0e75481fb891e0486229f6255ea3dcc1de2f8f781cfa09a0b"} err="failed to get container status \"6b2cefc5a4b36de0e75481fb891e0486229f6255ea3dcc1de2f8f781cfa09a0b\": rpc error: code = NotFound desc = could not find container \"6b2cefc5a4b36de0e75481fb891e0486229f6255ea3dcc1de2f8f781cfa09a0b\": container with ID starting with 6b2cefc5a4b36de0e75481fb891e0486229f6255ea3dcc1de2f8f781cfa09a0b not found: ID does not exist" Mar 20 00:24:59 crc kubenswrapper[5106]: I0320 00:24:59.174144 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4034c289-2147-4417-af83-20bc4d91baeb" path="/var/lib/kubelet/pods/4034c289-2147-4417-af83-20bc4d91baeb/volumes" Mar 20 00:24:59 crc kubenswrapper[5106]: I0320 00:24:59.343280 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:59 crc kubenswrapper[5106]: I0320 00:24:59.348843 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/353db55f-dddd-44dc-aade-e75b5d1783e7-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"353db55f-dddd-44dc-aade-e75b5d1783e7\") " pod="service-telemetry/prometheus-default-0" Mar 20 00:24:59 crc kubenswrapper[5106]: I0320 00:24:59.418293 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Mar 20 00:24:59 crc kubenswrapper[5106]: I0320 00:24:59.648382 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Mar 20 00:24:59 crc kubenswrapper[5106]: W0320 00:24:59.653175 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod353db55f_dddd_44dc_aade_e75b5d1783e7.slice/crio-4b3851d82b3748d9ffa78bee0b3bb67439d4f28b4ccfeae4b65d739042dda0b8 WatchSource:0}: Error finding container 4b3851d82b3748d9ffa78bee0b3bb67439d4f28b4ccfeae4b65d739042dda0b8: Status 404 returned error can't find the container with id 4b3851d82b3748d9ffa78bee0b3bb67439d4f28b4ccfeae4b65d739042dda0b8 Mar 20 00:24:59 crc kubenswrapper[5106]: I0320 00:24:59.658993 5106 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 00:24:59 crc kubenswrapper[5106]: I0320 00:24:59.866604 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"353db55f-dddd-44dc-aade-e75b5d1783e7","Type":"ContainerStarted","Data":"4b3851d82b3748d9ffa78bee0b3bb67439d4f28b4ccfeae4b65d739042dda0b8"} Mar 20 00:25:04 crc kubenswrapper[5106]: I0320 00:25:04.910095 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"353db55f-dddd-44dc-aade-e75b5d1783e7","Type":"ContainerStarted","Data":"01f4b33b8c32553e29de6659b26a642b6a756c06cb9c4d29e31bc2195170de3d"} Mar 20 00:25:08 crc kubenswrapper[5106]: I0320 00:25:08.806862 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-ssxqm"] Mar 20 00:25:08 crc kubenswrapper[5106]: I0320 00:25:08.809446 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4034c289-2147-4417-af83-20bc4d91baeb" containerName="registry-server" Mar 20 00:25:08 crc 
kubenswrapper[5106]: I0320 00:25:08.809475 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="4034c289-2147-4417-af83-20bc4d91baeb" containerName="registry-server" Mar 20 00:25:08 crc kubenswrapper[5106]: I0320 00:25:08.809521 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4034c289-2147-4417-af83-20bc4d91baeb" containerName="extract-utilities" Mar 20 00:25:08 crc kubenswrapper[5106]: I0320 00:25:08.809530 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="4034c289-2147-4417-af83-20bc4d91baeb" containerName="extract-utilities" Mar 20 00:25:08 crc kubenswrapper[5106]: I0320 00:25:08.809556 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4034c289-2147-4417-af83-20bc4d91baeb" containerName="extract-content" Mar 20 00:25:08 crc kubenswrapper[5106]: I0320 00:25:08.809566 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="4034c289-2147-4417-af83-20bc4d91baeb" containerName="extract-content" Mar 20 00:25:08 crc kubenswrapper[5106]: I0320 00:25:08.809722 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="4034c289-2147-4417-af83-20bc4d91baeb" containerName="registry-server" Mar 20 00:25:08 crc kubenswrapper[5106]: I0320 00:25:08.816394 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-ssxqm" Mar 20 00:25:08 crc kubenswrapper[5106]: I0320 00:25:08.825621 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-ssxqm"] Mar 20 00:25:08 crc kubenswrapper[5106]: I0320 00:25:08.916832 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hz5r\" (UniqueName: \"kubernetes.io/projected/345c517e-a922-4bff-b8a1-cf4f6b8e08c3-kube-api-access-5hz5r\") pod \"default-snmp-webhook-6774d8dfbc-ssxqm\" (UID: \"345c517e-a922-4bff-b8a1-cf4f6b8e08c3\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-ssxqm" Mar 20 00:25:08 crc kubenswrapper[5106]: I0320 00:25:08.942824 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw" event={"ID":"d20ab026-6f4e-4563-af61-88bef726e748","Type":"ContainerStarted","Data":"4d566606b3125c229022b25439ba5d17639d8ca7ea4b97daaca78c8d924137f5"} Mar 20 00:25:08 crc kubenswrapper[5106]: I0320 00:25:08.963825 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw" podStartSLOduration=2.899342051 podStartE2EDuration="22.963804369s" podCreationTimestamp="2026-03-20 00:24:46 +0000 UTC" firstStartedPulling="2026-03-20 00:24:48.131523057 +0000 UTC m=+942.565257111" lastFinishedPulling="2026-03-20 00:25:08.195985375 +0000 UTC m=+962.629719429" observedRunningTime="2026-03-20 00:25:08.960969649 +0000 UTC m=+963.394703693" watchObservedRunningTime="2026-03-20 00:25:08.963804369 +0000 UTC m=+963.397538423" Mar 20 00:25:09 crc kubenswrapper[5106]: I0320 00:25:09.018575 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5hz5r\" (UniqueName: \"kubernetes.io/projected/345c517e-a922-4bff-b8a1-cf4f6b8e08c3-kube-api-access-5hz5r\") pod 
\"default-snmp-webhook-6774d8dfbc-ssxqm\" (UID: \"345c517e-a922-4bff-b8a1-cf4f6b8e08c3\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-ssxqm" Mar 20 00:25:09 crc kubenswrapper[5106]: I0320 00:25:09.043634 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hz5r\" (UniqueName: \"kubernetes.io/projected/345c517e-a922-4bff-b8a1-cf4f6b8e08c3-kube-api-access-5hz5r\") pod \"default-snmp-webhook-6774d8dfbc-ssxqm\" (UID: \"345c517e-a922-4bff-b8a1-cf4f6b8e08c3\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-ssxqm" Mar 20 00:25:09 crc kubenswrapper[5106]: I0320 00:25:09.138520 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-ssxqm" Mar 20 00:25:09 crc kubenswrapper[5106]: I0320 00:25:09.618139 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-ssxqm"] Mar 20 00:25:09 crc kubenswrapper[5106]: I0320 00:25:09.950615 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-ssxqm" event={"ID":"345c517e-a922-4bff-b8a1-cf4f6b8e08c3","Type":"ContainerStarted","Data":"b29fd296ba33362615a6b46b2d2f05aa8955434c04c44745cd4fbfe8d794630e"} Mar 20 00:25:09 crc kubenswrapper[5106]: I0320 00:25:09.952003 5106 generic.go:358] "Generic (PLEG): container finished" podID="353db55f-dddd-44dc-aade-e75b5d1783e7" containerID="01f4b33b8c32553e29de6659b26a642b6a756c06cb9c4d29e31bc2195170de3d" exitCode=0 Mar 20 00:25:09 crc kubenswrapper[5106]: I0320 00:25:09.952082 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"353db55f-dddd-44dc-aade-e75b5d1783e7","Type":"ContainerDied","Data":"01f4b33b8c32553e29de6659b26a642b6a756c06cb9c4d29e31bc2195170de3d"} Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.274513 5106 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["service-telemetry/alertmanager-default-0"] Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.289361 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.289703 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.293124 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.293182 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.293183 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.293378 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-rcft5\"" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.293686 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.295491 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.386450 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " 
pod="service-telemetry/alertmanager-default-0" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.386495 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqzxn\" (UniqueName: \"kubernetes.io/projected/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-kube-api-access-dqzxn\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.386522 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-config-volume\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.386665 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.386794 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-config-out\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.386948 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-alertmanager-proxy-tls\") pod 
\"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.387019 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-web-config\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.387083 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-tls-assets\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.387110 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2383b84a-b64b-42d9-8230-8169d8618b8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2383b84a-b64b-42d9-8230-8169d8618b8c\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.488297 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-config-volume\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0" Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.488347 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: 
\"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.488379 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-config-out\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.488425 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: E0320 00:25:12.488537 5106 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Mar 20 00:25:12 crc kubenswrapper[5106]: E0320 00:25:12.488609 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-alertmanager-proxy-tls podName:6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b nodeName:}" failed. No retries permitted until 2026-03-20 00:25:12.988592017 +0000 UTC m=+967.422326071 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b") : secret "default-alertmanager-proxy-tls" not found
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.488927 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-web-config\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.488970 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-tls-assets\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.488990 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-2383b84a-b64b-42d9-8230-8169d8618b8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2383b84a-b64b-42d9-8230-8169d8618b8c\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.489044 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.489068 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dqzxn\" (UniqueName: \"kubernetes.io/projected/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-kube-api-access-dqzxn\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.495135 5106 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.495402 5106 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-2383b84a-b64b-42d9-8230-8169d8618b8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2383b84a-b64b-42d9-8230-8169d8618b8c\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1ef55c7b2a5b64f1ed5ca4b5e6bedd1dea9dbbd3af374a441567bb4db93a96da/globalmount\"" pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.497237 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-tls-assets\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.497968 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-web-config\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.498016 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.504847 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-config-volume\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.507860 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-config-out\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.516819 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqzxn\" (UniqueName: \"kubernetes.io/projected/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-kube-api-access-dqzxn\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.520937 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.543891 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-2383b84a-b64b-42d9-8230-8169d8618b8c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2383b84a-b64b-42d9-8230-8169d8618b8c\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: I0320 00:25:12.997000 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:12 crc kubenswrapper[5106]: E0320 00:25:12.997175 5106 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Mar 20 00:25:12 crc kubenswrapper[5106]: E0320 00:25:12.997261 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-alertmanager-proxy-tls podName:6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b nodeName:}" failed. No retries permitted until 2026-03-20 00:25:13.997239736 +0000 UTC m=+968.430973790 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b") : secret "default-alertmanager-proxy-tls" not found
Mar 20 00:25:14 crc kubenswrapper[5106]: I0320 00:25:14.015666 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:14 crc kubenswrapper[5106]: E0320 00:25:14.015929 5106 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Mar 20 00:25:14 crc kubenswrapper[5106]: E0320 00:25:14.016043 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-alertmanager-proxy-tls podName:6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b nodeName:}" failed. No retries permitted until 2026-03-20 00:25:16.016018092 +0000 UTC m=+970.449752136 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b") : secret "default-alertmanager-proxy-tls" not found
Mar 20 00:25:16 crc kubenswrapper[5106]: I0320 00:25:16.047049 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:16 crc kubenswrapper[5106]: E0320 00:25:16.047415 5106 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Mar 20 00:25:16 crc kubenswrapper[5106]: E0320 00:25:16.047624 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-alertmanager-proxy-tls podName:6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b nodeName:}" failed. No retries permitted until 2026-03-20 00:25:20.047557386 +0000 UTC m=+974.481291480 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b") : secret "default-alertmanager-proxy-tls" not found
Mar 20 00:25:20 crc kubenswrapper[5106]: I0320 00:25:20.121514 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:20 crc kubenswrapper[5106]: I0320 00:25:20.130510 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b\") " pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:20 crc kubenswrapper[5106]: I0320 00:25:20.151098 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Mar 20 00:25:23 crc kubenswrapper[5106]: I0320 00:25:23.216328 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Mar 20 00:25:23 crc kubenswrapper[5106]: W0320 00:25:23.325719 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f6ee6c2_a774_4bd3_91f4_bcccfb10e93b.slice/crio-a44aa835e87e51d64ef2da878da92003b56de082484ede5d9d87cf4b73e2514e WatchSource:0}: Error finding container a44aa835e87e51d64ef2da878da92003b56de082484ede5d9d87cf4b73e2514e: Status 404 returned error can't find the container with id a44aa835e87e51d64ef2da878da92003b56de082484ede5d9d87cf4b73e2514e
Mar 20 00:25:24 crc kubenswrapper[5106]: I0320 00:25:24.060737 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b","Type":"ContainerStarted","Data":"a44aa835e87e51d64ef2da878da92003b56de082484ede5d9d87cf4b73e2514e"}
Mar 20 00:25:24 crc kubenswrapper[5106]: I0320 00:25:24.062461 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-ssxqm" event={"ID":"345c517e-a922-4bff-b8a1-cf4f6b8e08c3","Type":"ContainerStarted","Data":"abbb5619419c90aa1c9a29c25d1d2b2840e821c4c2eca4cb4b42929a3f391330"}
Mar 20 00:25:24 crc kubenswrapper[5106]: I0320 00:25:24.064704 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"353db55f-dddd-44dc-aade-e75b5d1783e7","Type":"ContainerStarted","Data":"dedc0c013d5f3d7b2dc82ccbc941aa3d3f6c5430f27af17f7b0b056ea33000d8"}
Mar 20 00:25:24 crc kubenswrapper[5106]: I0320 00:25:24.080676 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-ssxqm" podStartSLOduration=2.542277317 podStartE2EDuration="16.080647058s" podCreationTimestamp="2026-03-20 00:25:08 +0000 UTC" firstStartedPulling="2026-03-20 00:25:09.6209484 +0000 UTC m=+964.054682454" lastFinishedPulling="2026-03-20 00:25:23.159318141 +0000 UTC m=+977.593052195" observedRunningTime="2026-03-20 00:25:24.07671385 +0000 UTC m=+978.510447914" watchObservedRunningTime="2026-03-20 00:25:24.080647058 +0000 UTC m=+978.514381112"
Mar 20 00:25:25 crc kubenswrapper[5106]: I0320 00:25:25.372911 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 00:25:25 crc kubenswrapper[5106]: I0320 00:25:25.372995 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 00:25:26 crc kubenswrapper[5106]: I0320 00:25:26.081731 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"353db55f-dddd-44dc-aade-e75b5d1783e7","Type":"ContainerStarted","Data":"622e3e83df46e1625641811d450c6d51ee0c696ef3c75e337124fe74b7a32b14"}
Mar 20 00:25:26 crc kubenswrapper[5106]: I0320 00:25:26.097824 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b","Type":"ContainerStarted","Data":"235aa03e5ceca3dcae06fe231ee1795b4df0f5dc0864a4a7ead76a1ff7bbd383"}
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.129486 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"]
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.145013 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"]
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.145192 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.149997 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\""
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.150363 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-vzjwn\""
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.150590 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\""
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.151165 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\""
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.187555 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9e07806a-4738-42af-b42b-44ab2fc88123-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.187615 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9e07806a-4738-42af-b42b-44ab2fc88123-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.187635 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv4rt\" (UniqueName: \"kubernetes.io/projected/9e07806a-4738-42af-b42b-44ab2fc88123-kube-api-access-cv4rt\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.187704 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9e07806a-4738-42af-b42b-44ab2fc88123-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.187734 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9e07806a-4738-42af-b42b-44ab2fc88123-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.289453 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9e07806a-4738-42af-b42b-44ab2fc88123-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.289533 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9e07806a-4738-42af-b42b-44ab2fc88123-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.289556 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9e07806a-4738-42af-b42b-44ab2fc88123-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.289592 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cv4rt\" (UniqueName: \"kubernetes.io/projected/9e07806a-4738-42af-b42b-44ab2fc88123-kube-api-access-cv4rt\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.289661 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9e07806a-4738-42af-b42b-44ab2fc88123-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: E0320 00:25:30.289751 5106 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Mar 20 00:25:30 crc kubenswrapper[5106]: E0320 00:25:30.289838 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e07806a-4738-42af-b42b-44ab2fc88123-default-cloud1-coll-meter-proxy-tls podName:9e07806a-4738-42af-b42b-44ab2fc88123 nodeName:}" failed. No retries permitted until 2026-03-20 00:25:30.789818972 +0000 UTC m=+985.223553026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9e07806a-4738-42af-b42b-44ab2fc88123-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-7zcbq" (UID: "9e07806a-4738-42af-b42b-44ab2fc88123") : secret "default-cloud1-coll-meter-proxy-tls" not found
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.290009 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9e07806a-4738-42af-b42b-44ab2fc88123-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.290490 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9e07806a-4738-42af-b42b-44ab2fc88123-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.306205 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9e07806a-4738-42af-b42b-44ab2fc88123-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.307758 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv4rt\" (UniqueName: \"kubernetes.io/projected/9e07806a-4738-42af-b42b-44ab2fc88123-kube-api-access-cv4rt\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: I0320 00:25:30.796882 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9e07806a-4738-42af-b42b-44ab2fc88123-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:30 crc kubenswrapper[5106]: E0320 00:25:30.797054 5106 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Mar 20 00:25:30 crc kubenswrapper[5106]: E0320 00:25:30.797259 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e07806a-4738-42af-b42b-44ab2fc88123-default-cloud1-coll-meter-proxy-tls podName:9e07806a-4738-42af-b42b-44ab2fc88123 nodeName:}" failed. No retries permitted until 2026-03-20 00:25:31.797243234 +0000 UTC m=+986.230977288 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9e07806a-4738-42af-b42b-44ab2fc88123-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-7zcbq" (UID: "9e07806a-4738-42af-b42b-44ab2fc88123") : secret "default-cloud1-coll-meter-proxy-tls" not found
Mar 20 00:25:31 crc kubenswrapper[5106]: I0320 00:25:31.974387 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9e07806a-4738-42af-b42b-44ab2fc88123-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:31 crc kubenswrapper[5106]: I0320 00:25:31.989694 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9e07806a-4738-42af-b42b-44ab2fc88123-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-7zcbq\" (UID: \"9e07806a-4738-42af-b42b-44ab2fc88123\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:32 crc kubenswrapper[5106]: I0320 00:25:32.172018 5106 generic.go:358] "Generic (PLEG): container finished" podID="6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b" containerID="235aa03e5ceca3dcae06fe231ee1795b4df0f5dc0864a4a7ead76a1ff7bbd383" exitCode=0
Mar 20 00:25:32 crc kubenswrapper[5106]: I0320 00:25:32.172433 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b","Type":"ContainerDied","Data":"235aa03e5ceca3dcae06fe231ee1795b4df0f5dc0864a4a7ead76a1ff7bbd383"}
Mar 20 00:25:32 crc kubenswrapper[5106]: I0320 00:25:32.281406 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"
Mar 20 00:25:32 crc kubenswrapper[5106]: I0320 00:25:32.495186 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq"]
Mar 20 00:25:33 crc kubenswrapper[5106]: I0320 00:25:33.178968 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq" event={"ID":"9e07806a-4738-42af-b42b-44ab2fc88123","Type":"ContainerStarted","Data":"138deb96d0035906b3150434ce5c496aeeeba6f931887b8b8993d9d3aba468ff"}
Mar 20 00:25:33 crc kubenswrapper[5106]: I0320 00:25:33.720243 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"]
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.270689 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.277092 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\""
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.278109 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\""
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.286624 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"]
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.327541 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/71b5b240-4732-4fcf-9100-675ccacc62e0-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9\" (UID: \"71b5b240-4732-4fcf-9100-675ccacc62e0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.327656 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/71b5b240-4732-4fcf-9100-675ccacc62e0-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9\" (UID: \"71b5b240-4732-4fcf-9100-675ccacc62e0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.327710 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr5xn\" (UniqueName: \"kubernetes.io/projected/71b5b240-4732-4fcf-9100-675ccacc62e0-kube-api-access-qr5xn\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9\" (UID: \"71b5b240-4732-4fcf-9100-675ccacc62e0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.327738 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/71b5b240-4732-4fcf-9100-675ccacc62e0-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9\" (UID: \"71b5b240-4732-4fcf-9100-675ccacc62e0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.327976 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/71b5b240-4732-4fcf-9100-675ccacc62e0-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9\" (UID: \"71b5b240-4732-4fcf-9100-675ccacc62e0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.429633 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/71b5b240-4732-4fcf-9100-675ccacc62e0-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9\" (UID: \"71b5b240-4732-4fcf-9100-675ccacc62e0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.429715 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qr5xn\" (UniqueName: \"kubernetes.io/projected/71b5b240-4732-4fcf-9100-675ccacc62e0-kube-api-access-qr5xn\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9\" (UID: \"71b5b240-4732-4fcf-9100-675ccacc62e0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.429750 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/71b5b240-4732-4fcf-9100-675ccacc62e0-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9\" (UID: \"71b5b240-4732-4fcf-9100-675ccacc62e0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.429816 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/71b5b240-4732-4fcf-9100-675ccacc62e0-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9\" (UID: \"71b5b240-4732-4fcf-9100-675ccacc62e0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.430477 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/71b5b240-4732-4fcf-9100-675ccacc62e0-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9\" (UID: \"71b5b240-4732-4fcf-9100-675ccacc62e0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.430897 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/71b5b240-4732-4fcf-9100-675ccacc62e0-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9\" (UID: \"71b5b240-4732-4fcf-9100-675ccacc62e0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.431026 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/71b5b240-4732-4fcf-9100-675ccacc62e0-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9\" (UID: \"71b5b240-4732-4fcf-9100-675ccacc62e0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.436130 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/71b5b240-4732-4fcf-9100-675ccacc62e0-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9\" (UID: \"71b5b240-4732-4fcf-9100-675ccacc62e0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.442653 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/71b5b240-4732-4fcf-9100-675ccacc62e0-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9\" (UID: \"71b5b240-4732-4fcf-9100-675ccacc62e0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.452915 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr5xn\" (UniqueName: \"kubernetes.io/projected/71b5b240-4732-4fcf-9100-675ccacc62e0-kube-api-access-qr5xn\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9\" (UID: \"71b5b240-4732-4fcf-9100-675ccacc62e0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:35 crc kubenswrapper[5106]: I0320 00:25:35.594646 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"
Mar 20 00:25:36 crc kubenswrapper[5106]: I0320 00:25:36.023642 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9"]
Mar 20 00:25:36 crc kubenswrapper[5106]: I0320 00:25:36.204026 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9" event={"ID":"71b5b240-4732-4fcf-9100-675ccacc62e0","Type":"ContainerStarted","Data":"d9ed3f81aac14d95c774f628771770c98a926ee99b8b8e20c920c47838c5fb26"}
Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.115915 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz"]
Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.416288 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz"]
Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.416365 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz"
Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.418983 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\""
Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.419032 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\""
Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.494996 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz"
Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.495052 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz"
Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.495086 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qn56\" (UniqueName: \"kubernetes.io/projected/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-kube-api-access-9qn56\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz"
Mar 20 00:25:38 crc
kubenswrapper[5106]: I0320 00:25:38.495113 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.495181 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.596875 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.596952 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.596986 5106 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.597015 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9qn56\" (UniqueName: \"kubernetes.io/projected/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-kube-api-access-9qn56\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.597039 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" Mar 20 00:25:38 crc kubenswrapper[5106]: E0320 00:25:38.597522 5106 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Mar 20 00:25:38 crc kubenswrapper[5106]: E0320 00:25:38.597662 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-default-cloud1-sens-meter-proxy-tls podName:a4dc4e0a-b766-4d09-ab7f-18dd765339bf nodeName:}" failed. No retries permitted until 2026-03-20 00:25:39.097627231 +0000 UTC m=+993.531361285 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" (UID: "a4dc4e0a-b766-4d09-ab7f-18dd765339bf") : secret "default-cloud1-sens-meter-proxy-tls" not found Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.597905 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.598877 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.604256 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" Mar 20 00:25:38 crc kubenswrapper[5106]: I0320 00:25:38.616156 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qn56\" (UniqueName: \"kubernetes.io/projected/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-kube-api-access-9qn56\") pod 
\"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" Mar 20 00:25:39 crc kubenswrapper[5106]: I0320 00:25:39.106205 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" Mar 20 00:25:39 crc kubenswrapper[5106]: E0320 00:25:39.106411 5106 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Mar 20 00:25:39 crc kubenswrapper[5106]: E0320 00:25:39.107145 5106 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-default-cloud1-sens-meter-proxy-tls podName:a4dc4e0a-b766-4d09-ab7f-18dd765339bf nodeName:}" failed. No retries permitted until 2026-03-20 00:25:40.107123474 +0000 UTC m=+994.540857528 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" (UID: "a4dc4e0a-b766-4d09-ab7f-18dd765339bf") : secret "default-cloud1-sens-meter-proxy-tls" not found Mar 20 00:25:40 crc kubenswrapper[5106]: I0320 00:25:40.122028 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" Mar 20 00:25:40 crc kubenswrapper[5106]: I0320 00:25:40.129047 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4dc4e0a-b766-4d09-ab7f-18dd765339bf-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz\" (UID: \"a4dc4e0a-b766-4d09-ab7f-18dd765339bf\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" Mar 20 00:25:40 crc kubenswrapper[5106]: I0320 00:25:40.246913 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" Mar 20 00:25:42 crc kubenswrapper[5106]: I0320 00:25:42.386921 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz"] Mar 20 00:25:43 crc kubenswrapper[5106]: I0320 00:25:43.294717 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b","Type":"ContainerStarted","Data":"cc19d2a819ad1ea241ae47a43793746ea5681e488a9acf755bee085fb95b9e8e"} Mar 20 00:25:43 crc kubenswrapper[5106]: I0320 00:25:43.297209 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" event={"ID":"a4dc4e0a-b766-4d09-ab7f-18dd765339bf","Type":"ContainerStarted","Data":"22dc5a02fda375ebc78bbaaff2d117300ccc0e20a04cdff045662bca0fb53937"} Mar 20 00:25:43 crc kubenswrapper[5106]: I0320 00:25:43.297272 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" event={"ID":"a4dc4e0a-b766-4d09-ab7f-18dd765339bf","Type":"ContainerStarted","Data":"287f36043fabf3c6d965a706f6117890f912bea9aff3e812680eb877841c36ea"} Mar 20 00:25:43 crc kubenswrapper[5106]: I0320 00:25:43.301105 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq" event={"ID":"9e07806a-4738-42af-b42b-44ab2fc88123","Type":"ContainerStarted","Data":"cb4a932905b78eb7e12da93c9e946e6c0236038dba0b48670afb9286e17c2fdc"} Mar 20 00:25:43 crc kubenswrapper[5106]: I0320 00:25:43.302693 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9" 
event={"ID":"71b5b240-4732-4fcf-9100-675ccacc62e0","Type":"ContainerStarted","Data":"93b61e38df5a14bd2697cfa133a9e080856ddeba7e719771a840b58c35fb975e"} Mar 20 00:25:43 crc kubenswrapper[5106]: I0320 00:25:43.305284 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"353db55f-dddd-44dc-aade-e75b5d1783e7","Type":"ContainerStarted","Data":"5788c6fb8235efe3471e14657a9d38300dc0312807e63e3877d1f09b879815ef"} Mar 20 00:25:43 crc kubenswrapper[5106]: I0320 00:25:43.335113 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.719175273 podStartE2EDuration="47.335098133s" podCreationTimestamp="2026-03-20 00:24:56 +0000 UTC" firstStartedPulling="2026-03-20 00:24:59.659379807 +0000 UTC m=+954.093113861" lastFinishedPulling="2026-03-20 00:25:42.275302667 +0000 UTC m=+996.709036721" observedRunningTime="2026-03-20 00:25:43.334479718 +0000 UTC m=+997.768213772" watchObservedRunningTime="2026-03-20 00:25:43.335098133 +0000 UTC m=+997.768832187" Mar 20 00:25:44 crc kubenswrapper[5106]: I0320 00:25:44.419497 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Mar 20 00:25:44 crc kubenswrapper[5106]: I0320 00:25:44.419545 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Mar 20 00:25:44 crc kubenswrapper[5106]: I0320 00:25:44.455245 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Mar 20 00:25:45 crc kubenswrapper[5106]: I0320 00:25:45.332134 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b","Type":"ContainerStarted","Data":"c0ea51158e2983296bed1d0e8f0bc305f55e600a6a704e421843811ef3cb7962"} Mar 20 00:25:45 crc kubenswrapper[5106]: 
I0320 00:25:45.332528 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b","Type":"ContainerStarted","Data":"de48e2be2aaeb7a63d993ca63e39aff56855ca4f4f4c8c87a6477a0839cc528b"} Mar 20 00:25:45 crc kubenswrapper[5106]: I0320 00:25:45.363272 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=21.510558843 podStartE2EDuration="34.363244812s" podCreationTimestamp="2026-03-20 00:25:11 +0000 UTC" firstStartedPulling="2026-03-20 00:25:32.173754938 +0000 UTC m=+986.607488992" lastFinishedPulling="2026-03-20 00:25:45.026440907 +0000 UTC m=+999.460174961" observedRunningTime="2026-03-20 00:25:45.356854763 +0000 UTC m=+999.790588847" watchObservedRunningTime="2026-03-20 00:25:45.363244812 +0000 UTC m=+999.796978866" Mar 20 00:25:45 crc kubenswrapper[5106]: I0320 00:25:45.414482 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Mar 20 00:25:46 crc kubenswrapper[5106]: I0320 00:25:46.864763 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555"] Mar 20 00:25:46 crc kubenswrapper[5106]: I0320 00:25:46.884353 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555"] Mar 20 00:25:46 crc kubenswrapper[5106]: I0320 00:25:46.884508 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" Mar 20 00:25:46 crc kubenswrapper[5106]: I0320 00:25:46.888012 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Mar 20 00:25:46 crc kubenswrapper[5106]: I0320 00:25:46.891338 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Mar 20 00:25:47 crc kubenswrapper[5106]: I0320 00:25:47.041615 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d91ba544-095b-49de-bd6e-a44cac005bb7-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5986c69f68-8z555\" (UID: \"d91ba544-095b-49de-bd6e-a44cac005bb7\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" Mar 20 00:25:47 crc kubenswrapper[5106]: I0320 00:25:47.041980 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x2l9\" (UniqueName: \"kubernetes.io/projected/d91ba544-095b-49de-bd6e-a44cac005bb7-kube-api-access-5x2l9\") pod \"default-cloud1-coll-event-smartgateway-5986c69f68-8z555\" (UID: \"d91ba544-095b-49de-bd6e-a44cac005bb7\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" Mar 20 00:25:47 crc kubenswrapper[5106]: I0320 00:25:47.042014 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/d91ba544-095b-49de-bd6e-a44cac005bb7-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5986c69f68-8z555\" (UID: \"d91ba544-095b-49de-bd6e-a44cac005bb7\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" Mar 20 00:25:47 crc kubenswrapper[5106]: I0320 00:25:47.042087 5106 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d91ba544-095b-49de-bd6e-a44cac005bb7-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5986c69f68-8z555\" (UID: \"d91ba544-095b-49de-bd6e-a44cac005bb7\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" Mar 20 00:25:47 crc kubenswrapper[5106]: I0320 00:25:47.143079 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5x2l9\" (UniqueName: \"kubernetes.io/projected/d91ba544-095b-49de-bd6e-a44cac005bb7-kube-api-access-5x2l9\") pod \"default-cloud1-coll-event-smartgateway-5986c69f68-8z555\" (UID: \"d91ba544-095b-49de-bd6e-a44cac005bb7\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" Mar 20 00:25:47 crc kubenswrapper[5106]: I0320 00:25:47.143141 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/d91ba544-095b-49de-bd6e-a44cac005bb7-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5986c69f68-8z555\" (UID: \"d91ba544-095b-49de-bd6e-a44cac005bb7\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" Mar 20 00:25:47 crc kubenswrapper[5106]: I0320 00:25:47.143233 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d91ba544-095b-49de-bd6e-a44cac005bb7-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5986c69f68-8z555\" (UID: \"d91ba544-095b-49de-bd6e-a44cac005bb7\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" Mar 20 00:25:47 crc kubenswrapper[5106]: I0320 00:25:47.143265 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/d91ba544-095b-49de-bd6e-a44cac005bb7-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5986c69f68-8z555\" (UID: \"d91ba544-095b-49de-bd6e-a44cac005bb7\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" Mar 20 00:25:47 crc kubenswrapper[5106]: I0320 00:25:47.143976 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d91ba544-095b-49de-bd6e-a44cac005bb7-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-5986c69f68-8z555\" (UID: \"d91ba544-095b-49de-bd6e-a44cac005bb7\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" Mar 20 00:25:47 crc kubenswrapper[5106]: I0320 00:25:47.144969 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d91ba544-095b-49de-bd6e-a44cac005bb7-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-5986c69f68-8z555\" (UID: \"d91ba544-095b-49de-bd6e-a44cac005bb7\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" Mar 20 00:25:47 crc kubenswrapper[5106]: I0320 00:25:47.167258 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/d91ba544-095b-49de-bd6e-a44cac005bb7-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-5986c69f68-8z555\" (UID: \"d91ba544-095b-49de-bd6e-a44cac005bb7\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" Mar 20 00:25:47 crc kubenswrapper[5106]: I0320 00:25:47.174346 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x2l9\" (UniqueName: \"kubernetes.io/projected/d91ba544-095b-49de-bd6e-a44cac005bb7-kube-api-access-5x2l9\") pod \"default-cloud1-coll-event-smartgateway-5986c69f68-8z555\" (UID: \"d91ba544-095b-49de-bd6e-a44cac005bb7\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" Mar 20 00:25:47 crc kubenswrapper[5106]: I0320 00:25:47.209096 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.455710 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf"] Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.564058 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf"] Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.564223 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.566989 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.664829 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/27f8bbbc-bf77-4e31-ad8a-83849133a24c-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf\" (UID: \"27f8bbbc-bf77-4e31-ad8a-83849133a24c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.665014 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/27f8bbbc-bf77-4e31-ad8a-83849133a24c-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf\" (UID: \"27f8bbbc-bf77-4e31-ad8a-83849133a24c\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.665164 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plx54\" (UniqueName: \"kubernetes.io/projected/27f8bbbc-bf77-4e31-ad8a-83849133a24c-kube-api-access-plx54\") pod \"default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf\" (UID: \"27f8bbbc-bf77-4e31-ad8a-83849133a24c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.665238 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/27f8bbbc-bf77-4e31-ad8a-83849133a24c-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf\" (UID: \"27f8bbbc-bf77-4e31-ad8a-83849133a24c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.766601 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/27f8bbbc-bf77-4e31-ad8a-83849133a24c-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf\" (UID: \"27f8bbbc-bf77-4e31-ad8a-83849133a24c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.766691 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-plx54\" (UniqueName: \"kubernetes.io/projected/27f8bbbc-bf77-4e31-ad8a-83849133a24c-kube-api-access-plx54\") pod \"default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf\" (UID: \"27f8bbbc-bf77-4e31-ad8a-83849133a24c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.766731 
5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/27f8bbbc-bf77-4e31-ad8a-83849133a24c-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf\" (UID: \"27f8bbbc-bf77-4e31-ad8a-83849133a24c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.766804 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/27f8bbbc-bf77-4e31-ad8a-83849133a24c-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf\" (UID: \"27f8bbbc-bf77-4e31-ad8a-83849133a24c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.767319 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/27f8bbbc-bf77-4e31-ad8a-83849133a24c-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf\" (UID: \"27f8bbbc-bf77-4e31-ad8a-83849133a24c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.768017 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/27f8bbbc-bf77-4e31-ad8a-83849133a24c-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf\" (UID: \"27f8bbbc-bf77-4e31-ad8a-83849133a24c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.790197 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/27f8bbbc-bf77-4e31-ad8a-83849133a24c-elastic-certs\") pod 
\"default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf\" (UID: \"27f8bbbc-bf77-4e31-ad8a-83849133a24c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf"
Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.798821 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-plx54\" (UniqueName: \"kubernetes.io/projected/27f8bbbc-bf77-4e31-ad8a-83849133a24c-kube-api-access-plx54\") pod \"default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf\" (UID: \"27f8bbbc-bf77-4e31-ad8a-83849133a24c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf"
Mar 20 00:25:48 crc kubenswrapper[5106]: I0320 00:25:48.882400 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf"
Mar 20 00:25:50 crc kubenswrapper[5106]: I0320 00:25:50.719618 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf"]
Mar 20 00:25:50 crc kubenswrapper[5106]: I0320 00:25:50.982167 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555"]
Mar 20 00:25:50 crc kubenswrapper[5106]: W0320 00:25:50.982692 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd91ba544_095b_49de_bd6e_a44cac005bb7.slice/crio-3a1406ebc4b9d75c2bddfa58ceacb5fc1de8e09db9cc383e4edf93454fc80e07 WatchSource:0}: Error finding container 3a1406ebc4b9d75c2bddfa58ceacb5fc1de8e09db9cc383e4edf93454fc80e07: Status 404 returned error can't find the container with id 3a1406ebc4b9d75c2bddfa58ceacb5fc1de8e09db9cc383e4edf93454fc80e07
Mar 20 00:25:51 crc kubenswrapper[5106]: I0320 00:25:51.379646 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" event={"ID":"d91ba544-095b-49de-bd6e-a44cac005bb7","Type":"ContainerStarted","Data":"3a1406ebc4b9d75c2bddfa58ceacb5fc1de8e09db9cc383e4edf93454fc80e07"}
Mar 20 00:25:51 crc kubenswrapper[5106]: I0320 00:25:51.381359 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" event={"ID":"27f8bbbc-bf77-4e31-ad8a-83849133a24c","Type":"ContainerStarted","Data":"0ad94c537d825a1f58d836a58181792b71d9c064967783a7974d09c72e6e18f3"}
Mar 20 00:25:51 crc kubenswrapper[5106]: I0320 00:25:51.381399 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" event={"ID":"27f8bbbc-bf77-4e31-ad8a-83849133a24c","Type":"ContainerStarted","Data":"7d30b12b65233f00281398dfce25688857a716e93e59bd5bab587505300b8ce7"}
Mar 20 00:25:51 crc kubenswrapper[5106]: I0320 00:25:51.383025 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" event={"ID":"a4dc4e0a-b766-4d09-ab7f-18dd765339bf","Type":"ContainerStarted","Data":"ef7833975b43d3c6494b8c8d9cdfff6468b86f5556e4d510feab0706bc848b2d"}
Mar 20 00:25:51 crc kubenswrapper[5106]: I0320 00:25:51.384751 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq" event={"ID":"9e07806a-4738-42af-b42b-44ab2fc88123","Type":"ContainerStarted","Data":"6278773b92304dd0765a96493e7b4c51cd72f4c340aa241f9b9dfdf1e5586a24"}
Mar 20 00:25:51 crc kubenswrapper[5106]: I0320 00:25:51.386314 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9" event={"ID":"71b5b240-4732-4fcf-9100-675ccacc62e0","Type":"ContainerStarted","Data":"5926f68431f6afdd18024845821d0744206f48d3b4ad355667364fac277b2536"}
Mar 20 00:25:52 crc kubenswrapper[5106]: I0320 00:25:52.409095 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" event={"ID":"d91ba544-095b-49de-bd6e-a44cac005bb7","Type":"ContainerStarted","Data":"4bef760778fde9122d52ea5a1918eb1884a4157f8945d709c95f1e9b64c3d2a1"}
Mar 20 00:25:55 crc kubenswrapper[5106]: I0320 00:25:55.373032 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 00:25:55 crc kubenswrapper[5106]: I0320 00:25:55.373115 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 00:25:57 crc kubenswrapper[5106]: I0320 00:25:57.450143 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" event={"ID":"27f8bbbc-bf77-4e31-ad8a-83849133a24c","Type":"ContainerStarted","Data":"74b5396b6572d90f17bf83ee79a02da9d5658c2b09e4a636a400a0d6bd64733f"}
Mar 20 00:25:57 crc kubenswrapper[5106]: I0320 00:25:57.453234 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" event={"ID":"a4dc4e0a-b766-4d09-ab7f-18dd765339bf","Type":"ContainerStarted","Data":"c9eb11ff0b60dcedb55e5b55c3fb0fbc83408bd892a6b477b5e7f00060d33a45"}
Mar 20 00:25:57 crc kubenswrapper[5106]: I0320 00:25:57.455482 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq" event={"ID":"9e07806a-4738-42af-b42b-44ab2fc88123","Type":"ContainerStarted","Data":"06813e5b82befa7706e92dc47c795771b41a35985a9e8e24b406c229d843b37d"}
Mar 20 00:25:57 crc kubenswrapper[5106]: I0320 00:25:57.457288 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9" event={"ID":"71b5b240-4732-4fcf-9100-675ccacc62e0","Type":"ContainerStarted","Data":"29d60fa83442f69bb8288ecbbc3c1707cb282eebc4112fdc136ece527ec2f0e6"}
Mar 20 00:25:57 crc kubenswrapper[5106]: I0320 00:25:57.459005 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" event={"ID":"d91ba544-095b-49de-bd6e-a44cac005bb7","Type":"ContainerStarted","Data":"a1bf6671d95fe25632442396ac17da8b049080a23bb73d5856166249f816f7ba"}
Mar 20 00:25:57 crc kubenswrapper[5106]: I0320 00:25:57.474955 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" podStartSLOduration=3.936415092 podStartE2EDuration="9.474938833s" podCreationTimestamp="2026-03-20 00:25:48 +0000 UTC" firstStartedPulling="2026-03-20 00:25:50.742019152 +0000 UTC m=+1005.175753206" lastFinishedPulling="2026-03-20 00:25:56.280542903 +0000 UTC m=+1010.714276947" observedRunningTime="2026-03-20 00:25:57.470232797 +0000 UTC m=+1011.903966851" watchObservedRunningTime="2026-03-20 00:25:57.474938833 +0000 UTC m=+1011.908672887"
Mar 20 00:25:57 crc kubenswrapper[5106]: I0320 00:25:57.486126 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" podStartSLOduration=6.25030114 podStartE2EDuration="11.48610772s" podCreationTimestamp="2026-03-20 00:25:46 +0000 UTC" firstStartedPulling="2026-03-20 00:25:50.985158466 +0000 UTC m=+1005.418892520" lastFinishedPulling="2026-03-20 00:25:56.220965046 +0000 UTC m=+1010.654699100" observedRunningTime="2026-03-20 00:25:57.482229494 +0000 UTC m=+1011.915963558" watchObservedRunningTime="2026-03-20 00:25:57.48610772 +0000 UTC m=+1011.919841774"
Mar 20 00:25:57 crc kubenswrapper[5106]: I0320 00:25:57.506378 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9" podStartSLOduration=4.19947909 podStartE2EDuration="24.506356322s" podCreationTimestamp="2026-03-20 00:25:33 +0000 UTC" firstStartedPulling="2026-03-20 00:25:36.020888231 +0000 UTC m=+990.454622285" lastFinishedPulling="2026-03-20 00:25:56.327765463 +0000 UTC m=+1010.761499517" observedRunningTime="2026-03-20 00:25:57.498562609 +0000 UTC m=+1011.932296663" watchObservedRunningTime="2026-03-20 00:25:57.506356322 +0000 UTC m=+1011.940090376"
Mar 20 00:25:57 crc kubenswrapper[5106]: I0320 00:25:57.520820 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" podStartSLOduration=5.747106561 podStartE2EDuration="19.52080115s" podCreationTimestamp="2026-03-20 00:25:38 +0000 UTC" firstStartedPulling="2026-03-20 00:25:42.503588843 +0000 UTC m=+996.937322897" lastFinishedPulling="2026-03-20 00:25:56.277283432 +0000 UTC m=+1010.711017486" observedRunningTime="2026-03-20 00:25:57.517701043 +0000 UTC m=+1011.951435097" watchObservedRunningTime="2026-03-20 00:25:57.52080115 +0000 UTC m=+1011.954535204"
Mar 20 00:25:57 crc kubenswrapper[5106]: I0320 00:25:57.538524 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq" podStartSLOduration=3.853698847 podStartE2EDuration="27.538504158s" podCreationTimestamp="2026-03-20 00:25:30 +0000 UTC" firstStartedPulling="2026-03-20 00:25:32.501296802 +0000 UTC m=+986.935030856" lastFinishedPulling="2026-03-20 00:25:56.186102113 +0000 UTC m=+1010.619836167" observedRunningTime="2026-03-20 00:25:57.53292019 +0000 UTC m=+1011.966654244" watchObservedRunningTime="2026-03-20 00:25:57.538504158 +0000 UTC m=+1011.972238212"
Mar 20 00:26:00 crc kubenswrapper[5106]: I0320 00:26:00.137303 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566106-6ld88"]
Mar 20 00:26:00 crc kubenswrapper[5106]: I0320 00:26:00.178701 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566106-6ld88"]
Mar 20 00:26:00 crc kubenswrapper[5106]: I0320 00:26:00.178758 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566106-6ld88"
Mar 20 00:26:00 crc kubenswrapper[5106]: I0320 00:26:00.181297 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Mar 20 00:26:00 crc kubenswrapper[5106]: I0320 00:26:00.181926 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Mar 20 00:26:00 crc kubenswrapper[5106]: I0320 00:26:00.182110 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5fjw8\""
Mar 20 00:26:00 crc kubenswrapper[5106]: I0320 00:26:00.250773 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szgz2\" (UniqueName: \"kubernetes.io/projected/fce4f3c7-8ca1-463a-b63b-63d0a5d5af90-kube-api-access-szgz2\") pod \"auto-csr-approver-29566106-6ld88\" (UID: \"fce4f3c7-8ca1-463a-b63b-63d0a5d5af90\") " pod="openshift-infra/auto-csr-approver-29566106-6ld88"
Mar 20 00:26:00 crc kubenswrapper[5106]: I0320 00:26:00.352506 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-szgz2\" (UniqueName: \"kubernetes.io/projected/fce4f3c7-8ca1-463a-b63b-63d0a5d5af90-kube-api-access-szgz2\") pod \"auto-csr-approver-29566106-6ld88\" (UID: \"fce4f3c7-8ca1-463a-b63b-63d0a5d5af90\") " pod="openshift-infra/auto-csr-approver-29566106-6ld88"
Mar 20 00:26:00 crc kubenswrapper[5106]: I0320 00:26:00.374195 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-szgz2\" (UniqueName: \"kubernetes.io/projected/fce4f3c7-8ca1-463a-b63b-63d0a5d5af90-kube-api-access-szgz2\") pod \"auto-csr-approver-29566106-6ld88\" (UID: \"fce4f3c7-8ca1-463a-b63b-63d0a5d5af90\") " pod="openshift-infra/auto-csr-approver-29566106-6ld88"
Mar 20 00:26:00 crc kubenswrapper[5106]: I0320 00:26:00.500122 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566106-6ld88"
Mar 20 00:26:00 crc kubenswrapper[5106]: I0320 00:26:00.934359 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-69dcw"]
Mar 20 00:26:00 crc kubenswrapper[5106]: I0320 00:26:00.934960 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw" podUID="d20ab026-6f4e-4563-af61-88bef726e748" containerName="default-interconnect" containerID="cri-o://4d566606b3125c229022b25439ba5d17639d8ca7ea4b97daaca78c8d924137f5" gracePeriod=30
Mar 20 00:26:00 crc kubenswrapper[5106]: I0320 00:26:00.991502 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566106-6ld88"]
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.303188 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.349622 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-hnvss"]
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.350325 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d20ab026-6f4e-4563-af61-88bef726e748" containerName="default-interconnect"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.350344 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="d20ab026-6f4e-4563-af61-88bef726e748" containerName="default-interconnect"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.350497 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="d20ab026-6f4e-4563-af61-88bef726e748" containerName="default-interconnect"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.356281 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.367211 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-hnvss"]
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.391496 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-openstack-credentials\") pod \"d20ab026-6f4e-4563-af61-88bef726e748\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") "
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.391841 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-openstack-ca\") pod \"d20ab026-6f4e-4563-af61-88bef726e748\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") "
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.391976 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d20ab026-6f4e-4563-af61-88bef726e748-sasl-config\") pod \"d20ab026-6f4e-4563-af61-88bef726e748\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") "
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.392096 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-inter-router-ca\") pod \"d20ab026-6f4e-4563-af61-88bef726e748\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") "
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.392244 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-sasl-users\") pod \"d20ab026-6f4e-4563-af61-88bef726e748\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") "
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.392343 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-inter-router-credentials\") pod \"d20ab026-6f4e-4563-af61-88bef726e748\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") "
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.392443 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh6qt\" (UniqueName: \"kubernetes.io/projected/d20ab026-6f4e-4563-af61-88bef726e748-kube-api-access-lh6qt\") pod \"d20ab026-6f4e-4563-af61-88bef726e748\" (UID: \"d20ab026-6f4e-4563-af61-88bef726e748\") "
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.394447 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d20ab026-6f4e-4563-af61-88bef726e748-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "d20ab026-6f4e-4563-af61-88bef726e748" (UID: "d20ab026-6f4e-4563-af61-88bef726e748"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.412817 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "d20ab026-6f4e-4563-af61-88bef726e748" (UID: "d20ab026-6f4e-4563-af61-88bef726e748"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.412874 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "d20ab026-6f4e-4563-af61-88bef726e748" (UID: "d20ab026-6f4e-4563-af61-88bef726e748"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.412917 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "d20ab026-6f4e-4563-af61-88bef726e748" (UID: "d20ab026-6f4e-4563-af61-88bef726e748"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.412969 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "d20ab026-6f4e-4563-af61-88bef726e748" (UID: "d20ab026-6f4e-4563-af61-88bef726e748"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.413010 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "d20ab026-6f4e-4563-af61-88bef726e748" (UID: "d20ab026-6f4e-4563-af61-88bef726e748"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.413037 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d20ab026-6f4e-4563-af61-88bef726e748-kube-api-access-lh6qt" (OuterVolumeSpecName: "kube-api-access-lh6qt") pod "d20ab026-6f4e-4563-af61-88bef726e748" (UID: "d20ab026-6f4e-4563-af61-88bef726e748"). InnerVolumeSpecName "kube-api-access-lh6qt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.489416 5106 generic.go:358] "Generic (PLEG): container finished" podID="d20ab026-6f4e-4563-af61-88bef726e748" containerID="4d566606b3125c229022b25439ba5d17639d8ca7ea4b97daaca78c8d924137f5" exitCode=0
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.489591 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw" event={"ID":"d20ab026-6f4e-4563-af61-88bef726e748","Type":"ContainerDied","Data":"4d566606b3125c229022b25439ba5d17639d8ca7ea4b97daaca78c8d924137f5"}
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.489828 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw" event={"ID":"d20ab026-6f4e-4563-af61-88bef726e748","Type":"ContainerDied","Data":"8228c477c543a2ffe268a1821e323d61c33143af24b632623bae1d77a11288a2"}
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.489848 5106 scope.go:117] "RemoveContainer" containerID="4d566606b3125c229022b25439ba5d17639d8ca7ea4b97daaca78c8d924137f5"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.489687 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-69dcw"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.493926 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs2qw\" (UniqueName: \"kubernetes.io/projected/956cc818-a907-4d48-9536-ff3a62e61e10-kube-api-access-vs2qw\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.494038 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/956cc818-a907-4d48-9536-ff3a62e61e10-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.494115 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/956cc818-a907-4d48-9536-ff3a62e61e10-sasl-users\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.494259 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/956cc818-a907-4d48-9536-ff3a62e61e10-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.494331 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/956cc818-a907-4d48-9536-ff3a62e61e10-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.494415 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/956cc818-a907-4d48-9536-ff3a62e61e10-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.494495 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/956cc818-a907-4d48-9536-ff3a62e61e10-sasl-config\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.494803 5106 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/d20ab026-6f4e-4563-af61-88bef726e748-sasl-config\") on node \"crc\" DevicePath \"\""
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.494892 5106 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\""
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.494955 5106 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-sasl-users\") on node \"crc\" DevicePath \"\""
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.495040 5106 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\""
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.495149 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lh6qt\" (UniqueName: \"kubernetes.io/projected/d20ab026-6f4e-4563-af61-88bef726e748-kube-api-access-lh6qt\") on node \"crc\" DevicePath \"\""
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.495210 5106 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\""
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.495295 5106 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/d20ab026-6f4e-4563-af61-88bef726e748-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\""
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.495184 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566106-6ld88" event={"ID":"fce4f3c7-8ca1-463a-b63b-63d0a5d5af90","Type":"ContainerStarted","Data":"1ab8360629523aab1bdf60285287d2e56aa4b61debdd761e25ad63a9a5e701b7"}
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.500933 5106 generic.go:358] "Generic (PLEG): container finished" podID="27f8bbbc-bf77-4e31-ad8a-83849133a24c" containerID="0ad94c537d825a1f58d836a58181792b71d9c064967783a7974d09c72e6e18f3" exitCode=0
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.501032 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" event={"ID":"27f8bbbc-bf77-4e31-ad8a-83849133a24c","Type":"ContainerDied","Data":"0ad94c537d825a1f58d836a58181792b71d9c064967783a7974d09c72e6e18f3"}
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.501974 5106 scope.go:117] "RemoveContainer" containerID="0ad94c537d825a1f58d836a58181792b71d9c064967783a7974d09c72e6e18f3"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.507568 5106 generic.go:358] "Generic (PLEG): container finished" podID="a4dc4e0a-b766-4d09-ab7f-18dd765339bf" containerID="ef7833975b43d3c6494b8c8d9cdfff6468b86f5556e4d510feab0706bc848b2d" exitCode=0
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.507863 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" event={"ID":"a4dc4e0a-b766-4d09-ab7f-18dd765339bf","Type":"ContainerDied","Data":"ef7833975b43d3c6494b8c8d9cdfff6468b86f5556e4d510feab0706bc848b2d"}
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.508551 5106 scope.go:117] "RemoveContainer" containerID="ef7833975b43d3c6494b8c8d9cdfff6468b86f5556e4d510feab0706bc848b2d"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.518394 5106 scope.go:117] "RemoveContainer" containerID="4d566606b3125c229022b25439ba5d17639d8ca7ea4b97daaca78c8d924137f5"
Mar 20 00:26:01 crc kubenswrapper[5106]: E0320 00:26:01.519470 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d566606b3125c229022b25439ba5d17639d8ca7ea4b97daaca78c8d924137f5\": container with ID starting with 4d566606b3125c229022b25439ba5d17639d8ca7ea4b97daaca78c8d924137f5 not found: ID does not exist" containerID="4d566606b3125c229022b25439ba5d17639d8ca7ea4b97daaca78c8d924137f5"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.519517 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d566606b3125c229022b25439ba5d17639d8ca7ea4b97daaca78c8d924137f5"} err="failed to get container status \"4d566606b3125c229022b25439ba5d17639d8ca7ea4b97daaca78c8d924137f5\": rpc error: code = NotFound desc = could not find container \"4d566606b3125c229022b25439ba5d17639d8ca7ea4b97daaca78c8d924137f5\": container with ID starting with 4d566606b3125c229022b25439ba5d17639d8ca7ea4b97daaca78c8d924137f5 not found: ID does not exist"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.544625 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-69dcw"]
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.549338 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-69dcw"]
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.596763 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/956cc818-a907-4d48-9536-ff3a62e61e10-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.596823 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/956cc818-a907-4d48-9536-ff3a62e61e10-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.596884 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/956cc818-a907-4d48-9536-ff3a62e61e10-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.596917 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/956cc818-a907-4d48-9536-ff3a62e61e10-sasl-config\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.596981 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vs2qw\" (UniqueName: \"kubernetes.io/projected/956cc818-a907-4d48-9536-ff3a62e61e10-kube-api-access-vs2qw\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.597022 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/956cc818-a907-4d48-9536-ff3a62e61e10-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.597050 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/956cc818-a907-4d48-9536-ff3a62e61e10-sasl-users\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.601562 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/956cc818-a907-4d48-9536-ff3a62e61e10-sasl-config\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.604074 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/956cc818-a907-4d48-9536-ff3a62e61e10-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.607088 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/956cc818-a907-4d48-9536-ff3a62e61e10-sasl-users\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.609159 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/956cc818-a907-4d48-9536-ff3a62e61e10-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.611002 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/956cc818-a907-4d48-9536-ff3a62e61e10-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.612538 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/956cc818-a907-4d48-9536-ff3a62e61e10-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.621500 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs2qw\" (UniqueName: \"kubernetes.io/projected/956cc818-a907-4d48-9536-ff3a62e61e10-kube-api-access-vs2qw\") pod \"default-interconnect-55bf8d5cb-hnvss\" (UID: \"956cc818-a907-4d48-9536-ff3a62e61e10\") " pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:01 crc kubenswrapper[5106]: I0320 00:26:01.674221 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss"
Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.131504 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-hnvss"]
Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.516791 5106 generic.go:358] "Generic (PLEG): container finished" podID="71b5b240-4732-4fcf-9100-675ccacc62e0" containerID="5926f68431f6afdd18024845821d0744206f48d3b4ad355667364fac277b2536" exitCode=0
Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.516853 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9" event={"ID":"71b5b240-4732-4fcf-9100-675ccacc62e0","Type":"ContainerDied","Data":"5926f68431f6afdd18024845821d0744206f48d3b4ad355667364fac277b2536"}
Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.517826 5106 scope.go:117] "RemoveContainer" containerID="5926f68431f6afdd18024845821d0744206f48d3b4ad355667364fac277b2536"
Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.525141 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss" event={"ID":"956cc818-a907-4d48-9536-ff3a62e61e10","Type":"ContainerStarted","Data":"eb88606c77f97c256c8e5e1ae0d60082e54c9504e44aa34d0ad5952647a4ef07"}
Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.525188 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss" event={"ID":"956cc818-a907-4d48-9536-ff3a62e61e10","Type":"ContainerStarted","Data":"d7d2a9120d0dd968b9ac373f05ec7b41df3a10a8a783d7b13fab834ed73761ea"}
Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.527752 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566106-6ld88" event={"ID":"fce4f3c7-8ca1-463a-b63b-63d0a5d5af90","Type":"ContainerStarted","Data":"3c65fc18b6c4cf5fe0694a1228003db5194716f8e8ada8a99070caf8f3e436c4"}
Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.529646 5106 generic.go:358] "Generic (PLEG): container finished" podID="d91ba544-095b-49de-bd6e-a44cac005bb7" containerID="4bef760778fde9122d52ea5a1918eb1884a4157f8945d709c95f1e9b64c3d2a1" exitCode=0
Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.529765 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" event={"ID":"d91ba544-095b-49de-bd6e-a44cac005bb7","Type":"ContainerDied","Data":"4bef760778fde9122d52ea5a1918eb1884a4157f8945d709c95f1e9b64c3d2a1"}
Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.530277 5106 scope.go:117] "RemoveContainer" containerID="4bef760778fde9122d52ea5a1918eb1884a4157f8945d709c95f1e9b64c3d2a1"
Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.535903 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" event={"ID":"27f8bbbc-bf77-4e31-ad8a-83849133a24c","Type":"ContainerStarted","Data":"a3fd35e467b044c1143b9ed845eaacc3906d87b499d93c92c1e95b67ea92a38f"}
Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.555146 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" event={"ID":"a4dc4e0a-b766-4d09-ab7f-18dd765339bf","Type":"ContainerStarted","Data":"4107211047d535322d70a2abfa9fd0df95350248e8f1ea702e84c48f8c941b86"}
Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.560003 5106 generic.go:358] "Generic (PLEG): container finished" podID="9e07806a-4738-42af-b42b-44ab2fc88123" containerID="6278773b92304dd0765a96493e7b4c51cd72f4c340aa241f9b9dfdf1e5586a24" exitCode=0
Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.560104 5106 kubelet.go:2569] "SyncLoop (PLEG): event
for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq" event={"ID":"9e07806a-4738-42af-b42b-44ab2fc88123","Type":"ContainerDied","Data":"6278773b92304dd0765a96493e7b4c51cd72f4c340aa241f9b9dfdf1e5586a24"} Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.560717 5106 scope.go:117] "RemoveContainer" containerID="6278773b92304dd0765a96493e7b4c51cd72f4c340aa241f9b9dfdf1e5586a24" Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.567619 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566106-6ld88" podStartSLOduration=1.555606705 podStartE2EDuration="2.567595426s" podCreationTimestamp="2026-03-20 00:26:00 +0000 UTC" firstStartedPulling="2026-03-20 00:26:00.96086931 +0000 UTC m=+1015.394603364" lastFinishedPulling="2026-03-20 00:26:01.972858031 +0000 UTC m=+1016.406592085" observedRunningTime="2026-03-20 00:26:02.555391604 +0000 UTC m=+1016.989125658" watchObservedRunningTime="2026-03-20 00:26:02.567595426 +0000 UTC m=+1017.001329480" Mar 20 00:26:02 crc kubenswrapper[5106]: I0320 00:26:02.614797 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-hnvss" podStartSLOduration=2.614776495 podStartE2EDuration="2.614776495s" podCreationTimestamp="2026-03-20 00:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 00:26:02.610885399 +0000 UTC m=+1017.044619453" watchObservedRunningTime="2026-03-20 00:26:02.614776495 +0000 UTC m=+1017.048510569" Mar 20 00:26:03 crc kubenswrapper[5106]: I0320 00:26:03.205059 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d20ab026-6f4e-4563-af61-88bef726e748" path="/var/lib/kubelet/pods/d20ab026-6f4e-4563-af61-88bef726e748/volumes" Mar 20 00:26:03 crc kubenswrapper[5106]: I0320 00:26:03.570499 5106 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9" event={"ID":"71b5b240-4732-4fcf-9100-675ccacc62e0","Type":"ContainerStarted","Data":"d9935763e39420851e06fe5932428c412e2c0428bbec71d3e7f07c11bb712fa5"} Mar 20 00:26:03 crc kubenswrapper[5106]: I0320 00:26:03.574563 5106 generic.go:358] "Generic (PLEG): container finished" podID="fce4f3c7-8ca1-463a-b63b-63d0a5d5af90" containerID="3c65fc18b6c4cf5fe0694a1228003db5194716f8e8ada8a99070caf8f3e436c4" exitCode=0 Mar 20 00:26:03 crc kubenswrapper[5106]: I0320 00:26:03.574708 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566106-6ld88" event={"ID":"fce4f3c7-8ca1-463a-b63b-63d0a5d5af90","Type":"ContainerDied","Data":"3c65fc18b6c4cf5fe0694a1228003db5194716f8e8ada8a99070caf8f3e436c4"} Mar 20 00:26:03 crc kubenswrapper[5106]: I0320 00:26:03.577663 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-5986c69f68-8z555" event={"ID":"d91ba544-095b-49de-bd6e-a44cac005bb7","Type":"ContainerStarted","Data":"37e15442145fc74e9c82696eb4e0e09fef75f76852a59def147d282f05b8347a"} Mar 20 00:26:03 crc kubenswrapper[5106]: I0320 00:26:03.580600 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" event={"ID":"27f8bbbc-bf77-4e31-ad8a-83849133a24c","Type":"ContainerDied","Data":"a3fd35e467b044c1143b9ed845eaacc3906d87b499d93c92c1e95b67ea92a38f"} Mar 20 00:26:03 crc kubenswrapper[5106]: I0320 00:26:03.580644 5106 scope.go:117] "RemoveContainer" containerID="0ad94c537d825a1f58d836a58181792b71d9c064967783a7974d09c72e6e18f3" Mar 20 00:26:03 crc kubenswrapper[5106]: I0320 00:26:03.580599 5106 generic.go:358] "Generic (PLEG): container finished" podID="27f8bbbc-bf77-4e31-ad8a-83849133a24c" containerID="a3fd35e467b044c1143b9ed845eaacc3906d87b499d93c92c1e95b67ea92a38f" exitCode=0 Mar 20 00:26:03 crc kubenswrapper[5106]: I0320 
00:26:03.581899 5106 scope.go:117] "RemoveContainer" containerID="a3fd35e467b044c1143b9ed845eaacc3906d87b499d93c92c1e95b67ea92a38f" Mar 20 00:26:03 crc kubenswrapper[5106]: E0320 00:26:03.582233 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf_service-telemetry(27f8bbbc-bf77-4e31-ad8a-83849133a24c)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" podUID="27f8bbbc-bf77-4e31-ad8a-83849133a24c" Mar 20 00:26:03 crc kubenswrapper[5106]: I0320 00:26:03.590992 5106 generic.go:358] "Generic (PLEG): container finished" podID="a4dc4e0a-b766-4d09-ab7f-18dd765339bf" containerID="4107211047d535322d70a2abfa9fd0df95350248e8f1ea702e84c48f8c941b86" exitCode=0 Mar 20 00:26:03 crc kubenswrapper[5106]: I0320 00:26:03.591163 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" event={"ID":"a4dc4e0a-b766-4d09-ab7f-18dd765339bf","Type":"ContainerDied","Data":"4107211047d535322d70a2abfa9fd0df95350248e8f1ea702e84c48f8c941b86"} Mar 20 00:26:03 crc kubenswrapper[5106]: I0320 00:26:03.592038 5106 scope.go:117] "RemoveContainer" containerID="4107211047d535322d70a2abfa9fd0df95350248e8f1ea702e84c48f8c941b86" Mar 20 00:26:03 crc kubenswrapper[5106]: E0320 00:26:03.592387 5106 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz_service-telemetry(a4dc4e0a-b766-4d09-ab7f-18dd765339bf)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" podUID="a4dc4e0a-b766-4d09-ab7f-18dd765339bf" Mar 20 00:26:03 crc kubenswrapper[5106]: I0320 00:26:03.612044 5106 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-7zcbq" event={"ID":"9e07806a-4738-42af-b42b-44ab2fc88123","Type":"ContainerStarted","Data":"bc9eea7966a83af79de56ececbfcc6339cde8915d78bad10451c461102e0af62"} Mar 20 00:26:03 crc kubenswrapper[5106]: I0320 00:26:03.618937 5106 scope.go:117] "RemoveContainer" containerID="ef7833975b43d3c6494b8c8d9cdfff6468b86f5556e4d510feab0706bc848b2d" Mar 20 00:26:04 crc kubenswrapper[5106]: I0320 00:26:04.889785 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566106-6ld88" Mar 20 00:26:04 crc kubenswrapper[5106]: I0320 00:26:04.948755 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szgz2\" (UniqueName: \"kubernetes.io/projected/fce4f3c7-8ca1-463a-b63b-63d0a5d5af90-kube-api-access-szgz2\") pod \"fce4f3c7-8ca1-463a-b63b-63d0a5d5af90\" (UID: \"fce4f3c7-8ca1-463a-b63b-63d0a5d5af90\") " Mar 20 00:26:04 crc kubenswrapper[5106]: I0320 00:26:04.955859 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fce4f3c7-8ca1-463a-b63b-63d0a5d5af90-kube-api-access-szgz2" (OuterVolumeSpecName: "kube-api-access-szgz2") pod "fce4f3c7-8ca1-463a-b63b-63d0a5d5af90" (UID: "fce4f3c7-8ca1-463a-b63b-63d0a5d5af90"). InnerVolumeSpecName "kube-api-access-szgz2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:26:05 crc kubenswrapper[5106]: I0320 00:26:05.050543 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-szgz2\" (UniqueName: \"kubernetes.io/projected/fce4f3c7-8ca1-463a-b63b-63d0a5d5af90-kube-api-access-szgz2\") on node \"crc\" DevicePath \"\"" Mar 20 00:26:05 crc kubenswrapper[5106]: I0320 00:26:05.613195 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566100-rlzj4"] Mar 20 00:26:05 crc kubenswrapper[5106]: I0320 00:26:05.618707 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566100-rlzj4"] Mar 20 00:26:05 crc kubenswrapper[5106]: I0320 00:26:05.631262 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566106-6ld88" Mar 20 00:26:05 crc kubenswrapper[5106]: I0320 00:26:05.631306 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566106-6ld88" event={"ID":"fce4f3c7-8ca1-463a-b63b-63d0a5d5af90","Type":"ContainerDied","Data":"1ab8360629523aab1bdf60285287d2e56aa4b61debdd761e25ad63a9a5e701b7"} Mar 20 00:26:05 crc kubenswrapper[5106]: I0320 00:26:05.631347 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ab8360629523aab1bdf60285287d2e56aa4b61debdd761e25ad63a9a5e701b7" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.175011 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da609d33-74de-4b65-8e69-9f577e0f3605" path="/var/lib/kubelet/pods/da609d33-74de-4b65-8e69-9f577e0f3605/volumes" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.175954 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.178001 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="fce4f3c7-8ca1-463a-b63b-63d0a5d5af90" containerName="oc" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.178039 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="fce4f3c7-8ca1-463a-b63b-63d0a5d5af90" containerName="oc" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.178186 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="fce4f3c7-8ca1-463a-b63b-63d0a5d5af90" containerName="oc" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.408162 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.408416 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.410755 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.411688 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.495213 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/28fd6c3e-b903-4cdb-9ad2-dba537988ad9-qdr-test-config\") pod \"qdr-test\" (UID: \"28fd6c3e-b903-4cdb-9ad2-dba537988ad9\") " pod="service-telemetry/qdr-test" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.495303 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/28fd6c3e-b903-4cdb-9ad2-dba537988ad9-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"28fd6c3e-b903-4cdb-9ad2-dba537988ad9\") " pod="service-telemetry/qdr-test" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 
00:26:07.495348 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq4n6\" (UniqueName: \"kubernetes.io/projected/28fd6c3e-b903-4cdb-9ad2-dba537988ad9-kube-api-access-jq4n6\") pod \"qdr-test\" (UID: \"28fd6c3e-b903-4cdb-9ad2-dba537988ad9\") " pod="service-telemetry/qdr-test" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.596852 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/28fd6c3e-b903-4cdb-9ad2-dba537988ad9-qdr-test-config\") pod \"qdr-test\" (UID: \"28fd6c3e-b903-4cdb-9ad2-dba537988ad9\") " pod="service-telemetry/qdr-test" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.597250 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/28fd6c3e-b903-4cdb-9ad2-dba537988ad9-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"28fd6c3e-b903-4cdb-9ad2-dba537988ad9\") " pod="service-telemetry/qdr-test" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.597420 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jq4n6\" (UniqueName: \"kubernetes.io/projected/28fd6c3e-b903-4cdb-9ad2-dba537988ad9-kube-api-access-jq4n6\") pod \"qdr-test\" (UID: \"28fd6c3e-b903-4cdb-9ad2-dba537988ad9\") " pod="service-telemetry/qdr-test" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.598100 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/28fd6c3e-b903-4cdb-9ad2-dba537988ad9-qdr-test-config\") pod \"qdr-test\" (UID: \"28fd6c3e-b903-4cdb-9ad2-dba537988ad9\") " pod="service-telemetry/qdr-test" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.608302 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" 
(UniqueName: \"kubernetes.io/secret/28fd6c3e-b903-4cdb-9ad2-dba537988ad9-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"28fd6c3e-b903-4cdb-9ad2-dba537988ad9\") " pod="service-telemetry/qdr-test" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.626815 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq4n6\" (UniqueName: \"kubernetes.io/projected/28fd6c3e-b903-4cdb-9ad2-dba537988ad9-kube-api-access-jq4n6\") pod \"qdr-test\" (UID: \"28fd6c3e-b903-4cdb-9ad2-dba537988ad9\") " pod="service-telemetry/qdr-test" Mar 20 00:26:07 crc kubenswrapper[5106]: I0320 00:26:07.734270 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Mar 20 00:26:08 crc kubenswrapper[5106]: I0320 00:26:08.197013 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Mar 20 00:26:08 crc kubenswrapper[5106]: W0320 00:26:08.206114 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28fd6c3e_b903_4cdb_9ad2_dba537988ad9.slice/crio-be5a4d286f71b1003021255c488084f508c3b4154462dbf39d232b31f3988af7 WatchSource:0}: Error finding container be5a4d286f71b1003021255c488084f508c3b4154462dbf39d232b31f3988af7: Status 404 returned error can't find the container with id be5a4d286f71b1003021255c488084f508c3b4154462dbf39d232b31f3988af7 Mar 20 00:26:08 crc kubenswrapper[5106]: I0320 00:26:08.651558 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"28fd6c3e-b903-4cdb-9ad2-dba537988ad9","Type":"ContainerStarted","Data":"be5a4d286f71b1003021255c488084f508c3b4154462dbf39d232b31f3988af7"} Mar 20 00:26:17 crc kubenswrapper[5106]: I0320 00:26:17.168815 5106 scope.go:117] "RemoveContainer" containerID="a3fd35e467b044c1143b9ed845eaacc3906d87b499d93c92c1e95b67ea92a38f" Mar 20 00:26:18 crc kubenswrapper[5106]: I0320 00:26:18.160920 5106 
scope.go:117] "RemoveContainer" containerID="4107211047d535322d70a2abfa9fd0df95350248e8f1ea702e84c48f8c941b86" Mar 20 00:26:22 crc kubenswrapper[5106]: I0320 00:26:22.785286 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz" event={"ID":"a4dc4e0a-b766-4d09-ab7f-18dd765339bf","Type":"ContainerStarted","Data":"a02827c292af4cb3cd710945a95922f23ca1339edc5527b75816b595d4f6b41e"} Mar 20 00:26:22 crc kubenswrapper[5106]: I0320 00:26:22.790699 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"28fd6c3e-b903-4cdb-9ad2-dba537988ad9","Type":"ContainerStarted","Data":"15885473a43ff1b3a31cc4fa9cb60472c1e28dcf25eaabd8cfba396ecf7cedcf"} Mar 20 00:26:22 crc kubenswrapper[5106]: I0320 00:26:22.793135 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf" event={"ID":"27f8bbbc-bf77-4e31-ad8a-83849133a24c","Type":"ContainerStarted","Data":"631cba598535c17e6503ae09ce2164760eae33440bbd534f63d303b9e5eef6e2"} Mar 20 00:26:22 crc kubenswrapper[5106]: I0320 00:26:22.871789 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.2599005500000002 podStartE2EDuration="15.87173938s" podCreationTimestamp="2026-03-20 00:26:07 +0000 UTC" firstStartedPulling="2026-03-20 00:26:08.20843859 +0000 UTC m=+1022.642172644" lastFinishedPulling="2026-03-20 00:26:21.82027739 +0000 UTC m=+1036.254011474" observedRunningTime="2026-03-20 00:26:22.865207614 +0000 UTC m=+1037.298941668" watchObservedRunningTime="2026-03-20 00:26:22.87173938 +0000 UTC m=+1037.305473454" Mar 20 00:26:22 crc kubenswrapper[5106]: I0320 00:26:22.905791 5106 scope.go:117] "RemoveContainer" containerID="4dccf8ae829d970d52e52f61edf719b67b9506e7b42bd8575132d164c2af7193" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.215786 5106 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-2zngs"] Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.235728 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-2zngs"] Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.235859 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.241144 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.241278 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.241375 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.241280 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.241441 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.241614 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.365893 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-ceilometer-entrypoint-script\") pod 
\"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.365961 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.366030 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-healthcheck-log\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.366106 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-ceilometer-publisher\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.366135 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-sensubility-config\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.366155 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"collectd-config\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-collectd-config\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.366177 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4s89\" (UniqueName: \"kubernetes.io/projected/9f9af3a9-3581-49f5-af1e-86058ae68bff-kube-api-access-s4s89\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.467891 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.467951 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.467980 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-healthcheck-log\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.468037 5106 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-ceilometer-publisher\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.468058 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-sensubility-config\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.468078 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-collectd-config\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.468093 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s4s89\" (UniqueName: \"kubernetes.io/projected/9f9af3a9-3581-49f5-af1e-86058ae68bff-kube-api-access-s4s89\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.469384 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 
00:26:23.469972 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.470497 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-healthcheck-log\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.471203 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-sensubility-config\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.471262 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-collectd-config\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.471715 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-ceilometer-publisher\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.494870 5106 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4s89\" (UniqueName: \"kubernetes.io/projected/9f9af3a9-3581-49f5-af1e-86058ae68bff-kube-api-access-s4s89\") pod \"stf-smoketest-smoke1-2zngs\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.537354 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.543543 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.546119 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.553593 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.675111 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt9zt\" (UniqueName: \"kubernetes.io/projected/9fddd8d0-0b16-46b0-a722-ca48c7199f1c-kube-api-access-vt9zt\") pod \"curl\" (UID: \"9fddd8d0-0b16-46b0-a722-ca48c7199f1c\") " pod="service-telemetry/curl" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.776985 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vt9zt\" (UniqueName: \"kubernetes.io/projected/9fddd8d0-0b16-46b0-a722-ca48c7199f1c-kube-api-access-vt9zt\") pod \"curl\" (UID: \"9fddd8d0-0b16-46b0-a722-ca48c7199f1c\") " pod="service-telemetry/curl" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.796602 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt9zt\" (UniqueName: \"kubernetes.io/projected/9fddd8d0-0b16-46b0-a722-ca48c7199f1c-kube-api-access-vt9zt\") pod 
\"curl\" (UID: \"9fddd8d0-0b16-46b0-a722-ca48c7199f1c\") " pod="service-telemetry/curl" Mar 20 00:26:23 crc kubenswrapper[5106]: I0320 00:26:23.928003 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Mar 20 00:26:24 crc kubenswrapper[5106]: I0320 00:26:24.001346 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-2zngs"] Mar 20 00:26:24 crc kubenswrapper[5106]: I0320 00:26:24.352127 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Mar 20 00:26:24 crc kubenswrapper[5106]: I0320 00:26:24.809551 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"9fddd8d0-0b16-46b0-a722-ca48c7199f1c","Type":"ContainerStarted","Data":"534fba9b151a88ec6a144e295e6de6cc5b8280e4082b4b0f1cf8d73363f9793b"} Mar 20 00:26:24 crc kubenswrapper[5106]: I0320 00:26:24.812290 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-2zngs" event={"ID":"9f9af3a9-3581-49f5-af1e-86058ae68bff","Type":"ContainerStarted","Data":"28bb154d30f1e106bfe378046e85cde7ce419ba1dbd5ff462a4dc08e0558d255"} Mar 20 00:26:25 crc kubenswrapper[5106]: I0320 00:26:25.374034 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:26:25 crc kubenswrapper[5106]: I0320 00:26:25.374105 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:26:25 crc kubenswrapper[5106]: I0320 
00:26:25.374154 5106 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:26:25 crc kubenswrapper[5106]: I0320 00:26:25.374797 5106 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6986ec753318922c38954c5594d06021e7ff8e83bd99bff58c1e865b369e05df"} pod="openshift-machine-config-operator/machine-config-daemon-769dn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 00:26:25 crc kubenswrapper[5106]: I0320 00:26:25.374845 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" containerID="cri-o://6986ec753318922c38954c5594d06021e7ff8e83bd99bff58c1e865b369e05df" gracePeriod=600 Mar 20 00:26:25 crc kubenswrapper[5106]: I0320 00:26:25.821768 5106 generic.go:358] "Generic (PLEG): container finished" podID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerID="6986ec753318922c38954c5594d06021e7ff8e83bd99bff58c1e865b369e05df" exitCode=0 Mar 20 00:26:25 crc kubenswrapper[5106]: I0320 00:26:25.821857 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerDied","Data":"6986ec753318922c38954c5594d06021e7ff8e83bd99bff58c1e865b369e05df"} Mar 20 00:26:25 crc kubenswrapper[5106]: I0320 00:26:25.822260 5106 scope.go:117] "RemoveContainer" containerID="b9698c7bd4bd271067cba47912a53b2331be94e66a7a5d4468da4bc263f23f37" Mar 20 00:26:27 crc kubenswrapper[5106]: I0320 00:26:27.841545 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" 
event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerStarted","Data":"cf7ec03b37e8c742f509fc6499c29d91dba9387492f15bc72efea2582dec2229"} Mar 20 00:26:35 crc kubenswrapper[5106]: I0320 00:26:35.910231 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-2zngs" event={"ID":"9f9af3a9-3581-49f5-af1e-86058ae68bff","Type":"ContainerStarted","Data":"e361eabe965d1ea05412a3bf729727172c1074ed347e631668bbd486edfb354f"} Mar 20 00:26:35 crc kubenswrapper[5106]: I0320 00:26:35.912077 5106 generic.go:358] "Generic (PLEG): container finished" podID="9fddd8d0-0b16-46b0-a722-ca48c7199f1c" containerID="318938a84e737767031ba35294d185b92e02931c335e7d8709249cab9e37b75b" exitCode=0 Mar 20 00:26:35 crc kubenswrapper[5106]: I0320 00:26:35.912172 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"9fddd8d0-0b16-46b0-a722-ca48c7199f1c","Type":"ContainerDied","Data":"318938a84e737767031ba35294d185b92e02931c335e7d8709249cab9e37b75b"} Mar 20 00:26:41 crc kubenswrapper[5106]: I0320 00:26:41.415656 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Mar 20 00:26:41 crc kubenswrapper[5106]: I0320 00:26:41.476357 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt9zt\" (UniqueName: \"kubernetes.io/projected/9fddd8d0-0b16-46b0-a722-ca48c7199f1c-kube-api-access-vt9zt\") pod \"9fddd8d0-0b16-46b0-a722-ca48c7199f1c\" (UID: \"9fddd8d0-0b16-46b0-a722-ca48c7199f1c\") " Mar 20 00:26:41 crc kubenswrapper[5106]: I0320 00:26:41.482814 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fddd8d0-0b16-46b0-a722-ca48c7199f1c-kube-api-access-vt9zt" (OuterVolumeSpecName: "kube-api-access-vt9zt") pod "9fddd8d0-0b16-46b0-a722-ca48c7199f1c" (UID: "9fddd8d0-0b16-46b0-a722-ca48c7199f1c"). InnerVolumeSpecName "kube-api-access-vt9zt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:26:41 crc kubenswrapper[5106]: I0320 00:26:41.579325 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vt9zt\" (UniqueName: \"kubernetes.io/projected/9fddd8d0-0b16-46b0-a722-ca48c7199f1c-kube-api-access-vt9zt\") on node \"crc\" DevicePath \"\"" Mar 20 00:26:41 crc kubenswrapper[5106]: I0320 00:26:41.588465 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_9fddd8d0-0b16-46b0-a722-ca48c7199f1c/curl/0.log" Mar 20 00:26:41 crc kubenswrapper[5106]: I0320 00:26:41.906392 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-ssxqm_345c517e-a922-4bff-b8a1-cf4f6b8e08c3/prometheus-webhook-snmp/0.log" Mar 20 00:26:41 crc kubenswrapper[5106]: I0320 00:26:41.952063 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-2zngs" event={"ID":"9f9af3a9-3581-49f5-af1e-86058ae68bff","Type":"ContainerStarted","Data":"c1c8e034e2643d0f7fa1b91292fef1181ac10d07fdcbd73c9b94b3204c0e33c4"} Mar 20 00:26:41 crc kubenswrapper[5106]: I0320 00:26:41.955178 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"9fddd8d0-0b16-46b0-a722-ca48c7199f1c","Type":"ContainerDied","Data":"534fba9b151a88ec6a144e295e6de6cc5b8280e4082b4b0f1cf8d73363f9793b"} Mar 20 00:26:41 crc kubenswrapper[5106]: I0320 00:26:41.955207 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Mar 20 00:26:41 crc kubenswrapper[5106]: I0320 00:26:41.955224 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="534fba9b151a88ec6a144e295e6de6cc5b8280e4082b4b0f1cf8d73363f9793b" Mar 20 00:26:41 crc kubenswrapper[5106]: I0320 00:26:41.989315 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-2zngs" podStartSLOduration=1.5213119480000001 podStartE2EDuration="18.989292366s" podCreationTimestamp="2026-03-20 00:26:23 +0000 UTC" firstStartedPulling="2026-03-20 00:26:24.01827448 +0000 UTC m=+1038.452008534" lastFinishedPulling="2026-03-20 00:26:41.486254898 +0000 UTC m=+1055.919988952" observedRunningTime="2026-03-20 00:26:41.9792025 +0000 UTC m=+1056.412936554" watchObservedRunningTime="2026-03-20 00:26:41.989292366 +0000 UTC m=+1056.423026430" Mar 20 00:27:10 crc kubenswrapper[5106]: I0320 00:27:10.160439 5106 generic.go:358] "Generic (PLEG): container finished" podID="9f9af3a9-3581-49f5-af1e-86058ae68bff" containerID="e361eabe965d1ea05412a3bf729727172c1074ed347e631668bbd486edfb354f" exitCode=0 Mar 20 00:27:10 crc kubenswrapper[5106]: I0320 00:27:10.160549 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-2zngs" event={"ID":"9f9af3a9-3581-49f5-af1e-86058ae68bff","Type":"ContainerDied","Data":"e361eabe965d1ea05412a3bf729727172c1074ed347e631668bbd486edfb354f"} Mar 20 00:27:10 crc kubenswrapper[5106]: I0320 00:27:10.161785 5106 scope.go:117] "RemoveContainer" containerID="e361eabe965d1ea05412a3bf729727172c1074ed347e631668bbd486edfb354f" Mar 20 00:27:12 crc kubenswrapper[5106]: I0320 00:27:12.089282 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-ssxqm_345c517e-a922-4bff-b8a1-cf4f6b8e08c3/prometheus-webhook-snmp/0.log" Mar 20 00:27:14 crc kubenswrapper[5106]: I0320 00:27:14.201428 5106 generic.go:358] 
"Generic (PLEG): container finished" podID="9f9af3a9-3581-49f5-af1e-86058ae68bff" containerID="c1c8e034e2643d0f7fa1b91292fef1181ac10d07fdcbd73c9b94b3204c0e33c4" exitCode=0 Mar 20 00:27:14 crc kubenswrapper[5106]: I0320 00:27:14.201618 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-2zngs" event={"ID":"9f9af3a9-3581-49f5-af1e-86058ae68bff","Type":"ContainerDied","Data":"c1c8e034e2643d0f7fa1b91292fef1181ac10d07fdcbd73c9b94b3204c0e33c4"} Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.486531 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.561030 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-sensubility-config\") pod \"9f9af3a9-3581-49f5-af1e-86058ae68bff\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.561087 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-ceilometer-publisher\") pod \"9f9af3a9-3581-49f5-af1e-86058ae68bff\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.561134 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4s89\" (UniqueName: \"kubernetes.io/projected/9f9af3a9-3581-49f5-af1e-86058ae68bff-kube-api-access-s4s89\") pod \"9f9af3a9-3581-49f5-af1e-86058ae68bff\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.561161 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: 
\"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-collectd-config\") pod \"9f9af3a9-3581-49f5-af1e-86058ae68bff\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.561264 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-ceilometer-entrypoint-script\") pod \"9f9af3a9-3581-49f5-af1e-86058ae68bff\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.561295 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-collectd-entrypoint-script\") pod \"9f9af3a9-3581-49f5-af1e-86058ae68bff\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.561393 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-healthcheck-log\") pod \"9f9af3a9-3581-49f5-af1e-86058ae68bff\" (UID: \"9f9af3a9-3581-49f5-af1e-86058ae68bff\") " Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.581068 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9af3a9-3581-49f5-af1e-86058ae68bff-kube-api-access-s4s89" (OuterVolumeSpecName: "kube-api-access-s4s89") pod "9f9af3a9-3581-49f5-af1e-86058ae68bff" (UID: "9f9af3a9-3581-49f5-af1e-86058ae68bff"). InnerVolumeSpecName "kube-api-access-s4s89". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.582639 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "9f9af3a9-3581-49f5-af1e-86058ae68bff" (UID: "9f9af3a9-3581-49f5-af1e-86058ae68bff"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.582753 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "9f9af3a9-3581-49f5-af1e-86058ae68bff" (UID: "9f9af3a9-3581-49f5-af1e-86058ae68bff"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.583434 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "9f9af3a9-3581-49f5-af1e-86058ae68bff" (UID: "9f9af3a9-3581-49f5-af1e-86058ae68bff"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.583672 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "9f9af3a9-3581-49f5-af1e-86058ae68bff" (UID: "9f9af3a9-3581-49f5-af1e-86058ae68bff"). InnerVolumeSpecName "collectd-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.583786 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "9f9af3a9-3581-49f5-af1e-86058ae68bff" (UID: "9f9af3a9-3581-49f5-af1e-86058ae68bff"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.585375 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "9f9af3a9-3581-49f5-af1e-86058ae68bff" (UID: "9f9af3a9-3581-49f5-af1e-86058ae68bff"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.662779 5106 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.662811 5106 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-healthcheck-log\") on node \"crc\" DevicePath \"\"" Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.662823 5106 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-sensubility-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.662835 5106 reconciler_common.go:299] "Volume detached for volume 
\"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.662845 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s4s89\" (UniqueName: \"kubernetes.io/projected/9f9af3a9-3581-49f5-af1e-86058ae68bff-kube-api-access-s4s89\") on node \"crc\" DevicePath \"\"" Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.662856 5106 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-collectd-config\") on node \"crc\" DevicePath \"\"" Mar 20 00:27:15 crc kubenswrapper[5106]: I0320 00:27:15.662865 5106 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/9f9af3a9-3581-49f5-af1e-86058ae68bff-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Mar 20 00:27:16 crc kubenswrapper[5106]: I0320 00:27:16.221266 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-2zngs" event={"ID":"9f9af3a9-3581-49f5-af1e-86058ae68bff","Type":"ContainerDied","Data":"28bb154d30f1e106bfe378046e85cde7ce419ba1dbd5ff462a4dc08e0558d255"} Mar 20 00:27:16 crc kubenswrapper[5106]: I0320 00:27:16.221604 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28bb154d30f1e106bfe378046e85cde7ce419ba1dbd5ff462a4dc08e0558d255" Mar 20 00:27:16 crc kubenswrapper[5106]: I0320 00:27:16.221355 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-2zngs" Mar 20 00:27:17 crc kubenswrapper[5106]: I0320 00:27:17.526426 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-2zngs_9f9af3a9-3581-49f5-af1e-86058ae68bff/smoketest-collectd/0.log" Mar 20 00:27:17 crc kubenswrapper[5106]: I0320 00:27:17.899238 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-2zngs_9f9af3a9-3581-49f5-af1e-86058ae68bff/smoketest-ceilometer/0.log" Mar 20 00:27:18 crc kubenswrapper[5106]: I0320 00:27:18.256949 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-hnvss_956cc818-a907-4d48-9536-ff3a62e61e10/default-interconnect/0.log" Mar 20 00:27:18 crc kubenswrapper[5106]: I0320 00:27:18.562828 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-7zcbq_9e07806a-4738-42af-b42b-44ab2fc88123/bridge/1.log" Mar 20 00:27:18 crc kubenswrapper[5106]: I0320 00:27:18.879196 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-7zcbq_9e07806a-4738-42af-b42b-44ab2fc88123/sg-core/0.log" Mar 20 00:27:19 crc kubenswrapper[5106]: I0320 00:27:19.160468 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-5986c69f68-8z555_d91ba544-095b-49de-bd6e-a44cac005bb7/bridge/1.log" Mar 20 00:27:19 crc kubenswrapper[5106]: I0320 00:27:19.422371 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-5986c69f68-8z555_d91ba544-095b-49de-bd6e-a44cac005bb7/sg-core/0.log" Mar 20 00:27:19 crc kubenswrapper[5106]: I0320 00:27:19.704533 5106 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9_71b5b240-4732-4fcf-9100-675ccacc62e0/bridge/1.log" Mar 20 00:27:19 crc kubenswrapper[5106]: I0320 00:27:19.960288 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-llsp9_71b5b240-4732-4fcf-9100-675ccacc62e0/sg-core/0.log" Mar 20 00:27:20 crc kubenswrapper[5106]: I0320 00:27:20.259914 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf_27f8bbbc-bf77-4e31-ad8a-83849133a24c/bridge/2.log" Mar 20 00:27:20 crc kubenswrapper[5106]: I0320 00:27:20.519638 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7c987d6559-j2ndf_27f8bbbc-bf77-4e31-ad8a-83849133a24c/sg-core/0.log" Mar 20 00:27:20 crc kubenswrapper[5106]: I0320 00:27:20.819456 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz_a4dc4e0a-b766-4d09-ab7f-18dd765339bf/bridge/2.log" Mar 20 00:27:21 crc kubenswrapper[5106]: I0320 00:27:21.093151 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-zpkvz_a4dc4e0a-b766-4d09-ab7f-18dd765339bf/sg-core/0.log" Mar 20 00:27:23 crc kubenswrapper[5106]: I0320 00:27:23.784084 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-fddbdb85c-cpbpr_60b774a2-0c92-4f09-898c-49a071b55d6f/operator/0.log" Mar 20 00:27:24 crc kubenswrapper[5106]: I0320 00:27:24.077919 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_353db55f-dddd-44dc-aade-e75b5d1783e7/prometheus/0.log" Mar 20 00:27:24 crc kubenswrapper[5106]: I0320 00:27:24.374840 5106 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_6a45ec79-6631-4cc3-a937-0b5e42ec3c8c/elasticsearch/0.log" Mar 20 00:27:24 crc kubenswrapper[5106]: I0320 00:27:24.613278 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-ssxqm_345c517e-a922-4bff-b8a1-cf4f6b8e08c3/prometheus-webhook-snmp/0.log" Mar 20 00:27:24 crc kubenswrapper[5106]: I0320 00:27:24.867734 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_6f6ee6c2-a774-4bd3-91f4-bcccfb10e93b/alertmanager/0.log" Mar 20 00:27:37 crc kubenswrapper[5106]: I0320 00:27:37.285954 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-7f569c45b4-gsq27_fa4b7b99-2abc-493d-8112-ce9b971dbef1/operator/0.log" Mar 20 00:27:40 crc kubenswrapper[5106]: I0320 00:27:40.059495 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-fddbdb85c-cpbpr_60b774a2-0c92-4f09-898c-49a071b55d6f/operator/0.log" Mar 20 00:27:40 crc kubenswrapper[5106]: I0320 00:27:40.299517 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_28fd6c3e-b903-4cdb-9ad2-dba537988ad9/qdr/0.log" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.134930 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566108-965pz"] Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.136301 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9fddd8d0-0b16-46b0-a722-ca48c7199f1c" containerName="curl" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.136319 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fddd8d0-0b16-46b0-a722-ca48c7199f1c" containerName="curl" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.136370 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="9f9af3a9-3581-49f5-af1e-86058ae68bff" containerName="smoketest-ceilometer" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.136377 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9af3a9-3581-49f5-af1e-86058ae68bff" containerName="smoketest-ceilometer" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.136390 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9f9af3a9-3581-49f5-af1e-86058ae68bff" containerName="smoketest-collectd" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.136398 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9af3a9-3581-49f5-af1e-86058ae68bff" containerName="smoketest-collectd" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.136536 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="9fddd8d0-0b16-46b0-a722-ca48c7199f1c" containerName="curl" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.136549 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="9f9af3a9-3581-49f5-af1e-86058ae68bff" containerName="smoketest-ceilometer" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.136567 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="9f9af3a9-3581-49f5-af1e-86058ae68bff" containerName="smoketest-collectd" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.148045 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566108-965pz"] Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.148262 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566108-965pz" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.151352 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5fjw8\"" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.151472 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.153379 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.234344 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj62j\" (UniqueName: \"kubernetes.io/projected/48b85866-210c-438f-8ffe-6cd2b1cce790-kube-api-access-qj62j\") pod \"auto-csr-approver-29566108-965pz\" (UID: \"48b85866-210c-438f-8ffe-6cd2b1cce790\") " pod="openshift-infra/auto-csr-approver-29566108-965pz" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.336709 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qj62j\" (UniqueName: \"kubernetes.io/projected/48b85866-210c-438f-8ffe-6cd2b1cce790-kube-api-access-qj62j\") pod \"auto-csr-approver-29566108-965pz\" (UID: \"48b85866-210c-438f-8ffe-6cd2b1cce790\") " pod="openshift-infra/auto-csr-approver-29566108-965pz" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.355958 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj62j\" (UniqueName: \"kubernetes.io/projected/48b85866-210c-438f-8ffe-6cd2b1cce790-kube-api-access-qj62j\") pod \"auto-csr-approver-29566108-965pz\" (UID: \"48b85866-210c-438f-8ffe-6cd2b1cce790\") " pod="openshift-infra/auto-csr-approver-29566108-965pz" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.468113 5106 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566108-965pz" Mar 20 00:28:00 crc kubenswrapper[5106]: I0320 00:28:00.667196 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566108-965pz"] Mar 20 00:28:01 crc kubenswrapper[5106]: I0320 00:28:01.620330 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566108-965pz" event={"ID":"48b85866-210c-438f-8ffe-6cd2b1cce790","Type":"ContainerStarted","Data":"44a8d3fb1e342f8f82ad98651af4eb960c01147c184720f7043bc7cb06d263bc"} Mar 20 00:28:02 crc kubenswrapper[5106]: I0320 00:28:02.630272 5106 generic.go:358] "Generic (PLEG): container finished" podID="48b85866-210c-438f-8ffe-6cd2b1cce790" containerID="b31212a4f680652fb4cdf6758131b78526d3c89d19d637a9a559af59289d266f" exitCode=0 Mar 20 00:28:02 crc kubenswrapper[5106]: I0320 00:28:02.630335 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566108-965pz" event={"ID":"48b85866-210c-438f-8ffe-6cd2b1cce790","Type":"ContainerDied","Data":"b31212a4f680652fb4cdf6758131b78526d3c89d19d637a9a559af59289d266f"} Mar 20 00:28:03 crc kubenswrapper[5106]: I0320 00:28:03.915948 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566108-965pz" Mar 20 00:28:03 crc kubenswrapper[5106]: I0320 00:28:03.991924 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj62j\" (UniqueName: \"kubernetes.io/projected/48b85866-210c-438f-8ffe-6cd2b1cce790-kube-api-access-qj62j\") pod \"48b85866-210c-438f-8ffe-6cd2b1cce790\" (UID: \"48b85866-210c-438f-8ffe-6cd2b1cce790\") " Mar 20 00:28:03 crc kubenswrapper[5106]: I0320 00:28:03.998915 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b85866-210c-438f-8ffe-6cd2b1cce790-kube-api-access-qj62j" (OuterVolumeSpecName: "kube-api-access-qj62j") pod "48b85866-210c-438f-8ffe-6cd2b1cce790" (UID: "48b85866-210c-438f-8ffe-6cd2b1cce790"). InnerVolumeSpecName "kube-api-access-qj62j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.093651 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qj62j\" (UniqueName: \"kubernetes.io/projected/48b85866-210c-438f-8ffe-6cd2b1cce790-kube-api-access-qj62j\") on node \"crc\" DevicePath \"\"" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.096295 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-g6584/must-gather-sz5fs"] Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.097049 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48b85866-210c-438f-8ffe-6cd2b1cce790" containerName="oc" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.097069 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b85866-210c-438f-8ffe-6cd2b1cce790" containerName="oc" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.097204 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="48b85866-210c-438f-8ffe-6cd2b1cce790" containerName="oc" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.101204 
5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-g6584/must-gather-sz5fs" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.103527 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-g6584\"/\"openshift-service-ca.crt\"" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.103563 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-g6584\"/\"kube-root-ca.crt\"" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.128022 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-g6584/must-gather-sz5fs"] Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.195634 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7846f3f3-7b74-42ae-a08f-67b54cd3c91f-must-gather-output\") pod \"must-gather-sz5fs\" (UID: \"7846f3f3-7b74-42ae-a08f-67b54cd3c91f\") " pod="openshift-must-gather-g6584/must-gather-sz5fs" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.195754 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd2g5\" (UniqueName: \"kubernetes.io/projected/7846f3f3-7b74-42ae-a08f-67b54cd3c91f-kube-api-access-dd2g5\") pod \"must-gather-sz5fs\" (UID: \"7846f3f3-7b74-42ae-a08f-67b54cd3c91f\") " pod="openshift-must-gather-g6584/must-gather-sz5fs" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.297037 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dd2g5\" (UniqueName: \"kubernetes.io/projected/7846f3f3-7b74-42ae-a08f-67b54cd3c91f-kube-api-access-dd2g5\") pod \"must-gather-sz5fs\" (UID: \"7846f3f3-7b74-42ae-a08f-67b54cd3c91f\") " pod="openshift-must-gather-g6584/must-gather-sz5fs" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.297513 5106 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7846f3f3-7b74-42ae-a08f-67b54cd3c91f-must-gather-output\") pod \"must-gather-sz5fs\" (UID: \"7846f3f3-7b74-42ae-a08f-67b54cd3c91f\") " pod="openshift-must-gather-g6584/must-gather-sz5fs" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.298063 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7846f3f3-7b74-42ae-a08f-67b54cd3c91f-must-gather-output\") pod \"must-gather-sz5fs\" (UID: \"7846f3f3-7b74-42ae-a08f-67b54cd3c91f\") " pod="openshift-must-gather-g6584/must-gather-sz5fs" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.317990 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd2g5\" (UniqueName: \"kubernetes.io/projected/7846f3f3-7b74-42ae-a08f-67b54cd3c91f-kube-api-access-dd2g5\") pod \"must-gather-sz5fs\" (UID: \"7846f3f3-7b74-42ae-a08f-67b54cd3c91f\") " pod="openshift-must-gather-g6584/must-gather-sz5fs" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.417482 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g6584/must-gather-sz5fs" Mar 20 00:28:04 crc kubenswrapper[5106]: W0320 00:28:04.643495 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7846f3f3_7b74_42ae_a08f_67b54cd3c91f.slice/crio-76f82abf077bb1f7241312a38907a4c300844a3e0149dc100aaa1515f6adecaf WatchSource:0}: Error finding container 76f82abf077bb1f7241312a38907a4c300844a3e0149dc100aaa1515f6adecaf: Status 404 returned error can't find the container with id 76f82abf077bb1f7241312a38907a4c300844a3e0149dc100aaa1515f6adecaf Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.644461 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-g6584/must-gather-sz5fs"] Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.655302 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g6584/must-gather-sz5fs" event={"ID":"7846f3f3-7b74-42ae-a08f-67b54cd3c91f","Type":"ContainerStarted","Data":"76f82abf077bb1f7241312a38907a4c300844a3e0149dc100aaa1515f6adecaf"} Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.657329 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566108-965pz" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.657378 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566108-965pz" event={"ID":"48b85866-210c-438f-8ffe-6cd2b1cce790","Type":"ContainerDied","Data":"44a8d3fb1e342f8f82ad98651af4eb960c01147c184720f7043bc7cb06d263bc"} Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.657435 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44a8d3fb1e342f8f82ad98651af4eb960c01147c184720f7043bc7cb06d263bc" Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.978148 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566102-tjhlf"] Mar 20 00:28:04 crc kubenswrapper[5106]: I0320 00:28:04.984016 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566102-tjhlf"] Mar 20 00:28:05 crc kubenswrapper[5106]: I0320 00:28:05.169657 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d134799c-135a-45ed-910c-a8a191d5232d" path="/var/lib/kubelet/pods/d134799c-135a-45ed-910c-a8a191d5232d/volumes" Mar 20 00:28:10 crc kubenswrapper[5106]: I0320 00:28:10.702992 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g6584/must-gather-sz5fs" event={"ID":"7846f3f3-7b74-42ae-a08f-67b54cd3c91f","Type":"ContainerStarted","Data":"bf43a6df36c2aee5fca29b4e525fbd96ed4b79057f5d3f7e4dd1775e3c3c6d35"} Mar 20 00:28:10 crc kubenswrapper[5106]: I0320 00:28:10.703661 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g6584/must-gather-sz5fs" event={"ID":"7846f3f3-7b74-42ae-a08f-67b54cd3c91f","Type":"ContainerStarted","Data":"fd9684f10b0c64284c4bdb14fba926173b3f9b5cb4f02e4db2b737858975acb9"} Mar 20 00:28:23 crc kubenswrapper[5106]: I0320 00:28:23.042573 5106 scope.go:117] "RemoveContainer" 
containerID="df1bbd8ee42e00d6fccd62e0d5e65b872d26a56101b94968ec35a3fcb8b0a0ce" Mar 20 00:28:50 crc kubenswrapper[5106]: I0320 00:28:50.366598 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-w2brd_992000e3-50f4-48fa-8a55-58bfade85d0c/control-plane-machine-set-operator/0.log" Mar 20 00:28:50 crc kubenswrapper[5106]: I0320 00:28:50.492495 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-8jhlx_a22b44ae-8b94-4d76-9211-859b665d08cb/kube-rbac-proxy/0.log" Mar 20 00:28:50 crc kubenswrapper[5106]: I0320 00:28:50.520235 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-8jhlx_a22b44ae-8b94-4d76-9211-859b665d08cb/machine-api-operator/0.log" Mar 20 00:28:55 crc kubenswrapper[5106]: I0320 00:28:55.373843 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:28:55 crc kubenswrapper[5106]: I0320 00:28:55.374300 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:29:01 crc kubenswrapper[5106]: I0320 00:29:01.804983 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-759f64656b-xrn2l_0ab713b6-d230-4252-9220-51441f61c903/cert-manager-controller/0.log" Mar 20 00:29:01 crc kubenswrapper[5106]: I0320 00:29:01.958774 5106 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-5s6ln_2e751cb4-8673-4c7b-91fd-8d080e2ddcfd/cert-manager-cainjector/0.log" Mar 20 00:29:01 crc kubenswrapper[5106]: I0320 00:29:01.996292 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-vc4jz_3971612b-b9d3-4678-859a-01070cad10d1/cert-manager-webhook/0.log" Mar 20 00:29:07 crc kubenswrapper[5106]: I0320 00:29:07.675006 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/2.log" Mar 20 00:29:07 crc kubenswrapper[5106]: I0320 00:29:07.714369 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xtksh_9da3e0a0-f6ab-4f57-925e-c59772b3d6d9/kube-multus/0.log" Mar 20 00:29:07 crc kubenswrapper[5106]: I0320 00:29:07.723360 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Mar 20 00:29:07 crc kubenswrapper[5106]: I0320 00:29:07.730399 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575dc4b4cf-qlhmn_93e57ca7-278b-47c3-a3ae-7c07849de478/oauth-openshift/2.log" Mar 20 00:29:07 crc kubenswrapper[5106]: I0320 00:29:07.762679 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xtksh_9da3e0a0-f6ab-4f57-925e-c59772b3d6d9/kube-multus/0.log" Mar 20 00:29:07 crc kubenswrapper[5106]: I0320 00:29:07.767572 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Mar 20 00:29:15 crc kubenswrapper[5106]: I0320 00:29:15.381430 5106 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-55568fc96c-xxrkx_a5a00efc-0f62-4415-97e0-e0bfd2f1276a/prometheus-operator/0.log" Mar 20 00:29:15 crc kubenswrapper[5106]: I0320 00:29:15.460635 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-659dbf9598-bbc28_2252d2cb-9b32-4137-8368-8b6c9bf4a267/prometheus-operator-admission-webhook/0.log" Mar 20 00:29:15 crc kubenswrapper[5106]: I0320 00:29:15.597558 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-659dbf9598-p8g95_343fff88-557a-4473-b878-7badd8470e8c/prometheus-operator-admission-webhook/0.log" Mar 20 00:29:15 crc kubenswrapper[5106]: I0320 00:29:15.653967 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-587f9c8867-zp5zg_eca89ef6-a7ff-48b6-a250-3b10b73a40be/operator/0.log" Mar 20 00:29:15 crc kubenswrapper[5106]: I0320 00:29:15.763875 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-6b7c6d77c9-8v544_78131e28-13b9-46ee-b506-d7d79f747263/perses-operator/0.log" Mar 20 00:29:25 crc kubenswrapper[5106]: I0320 00:29:25.373028 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:29:25 crc kubenswrapper[5106]: I0320 00:29:25.373722 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:29:28 crc kubenswrapper[5106]: I0320 
00:29:28.504818 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5_7b455fcd-8802-4d40-99b3-6863635bcccd/util/0.log" Mar 20 00:29:28 crc kubenswrapper[5106]: I0320 00:29:28.691213 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5_7b455fcd-8802-4d40-99b3-6863635bcccd/util/0.log" Mar 20 00:29:28 crc kubenswrapper[5106]: I0320 00:29:28.700122 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5_7b455fcd-8802-4d40-99b3-6863635bcccd/pull/0.log" Mar 20 00:29:28 crc kubenswrapper[5106]: I0320 00:29:28.724831 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5_7b455fcd-8802-4d40-99b3-6863635bcccd/pull/0.log" Mar 20 00:29:28 crc kubenswrapper[5106]: I0320 00:29:28.856059 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5_7b455fcd-8802-4d40-99b3-6863635bcccd/util/0.log" Mar 20 00:29:28 crc kubenswrapper[5106]: I0320 00:29:28.880306 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5_7b455fcd-8802-4d40-99b3-6863635bcccd/pull/0.log" Mar 20 00:29:28 crc kubenswrapper[5106]: I0320 00:29:28.891675 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fnqqt5_7b455fcd-8802-4d40-99b3-6863635bcccd/extract/0.log" Mar 20 00:29:29 crc kubenswrapper[5106]: I0320 00:29:29.026677 5106 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w_3184a606-fbc0-4b98-bab9-3050d0f2a6fc/util/0.log" Mar 20 00:29:29 crc kubenswrapper[5106]: I0320 00:29:29.186528 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w_3184a606-fbc0-4b98-bab9-3050d0f2a6fc/util/0.log" Mar 20 00:29:29 crc kubenswrapper[5106]: I0320 00:29:29.207782 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w_3184a606-fbc0-4b98-bab9-3050d0f2a6fc/pull/0.log" Mar 20 00:29:29 crc kubenswrapper[5106]: I0320 00:29:29.232863 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w_3184a606-fbc0-4b98-bab9-3050d0f2a6fc/pull/0.log" Mar 20 00:29:29 crc kubenswrapper[5106]: I0320 00:29:29.406201 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w_3184a606-fbc0-4b98-bab9-3050d0f2a6fc/util/0.log" Mar 20 00:29:29 crc kubenswrapper[5106]: I0320 00:29:29.435240 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w_3184a606-fbc0-4b98-bab9-3050d0f2a6fc/pull/0.log" Mar 20 00:29:29 crc kubenswrapper[5106]: I0320 00:29:29.490663 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7acef1e4a10e04db4e216682ff91f6a23804f55f83b8dd8f8f8f5ac39ed5m9w_3184a606-fbc0-4b98-bab9-3050d0f2a6fc/extract/0.log" Mar 20 00:29:29 crc kubenswrapper[5106]: I0320 00:29:29.721317 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc_2f3907be-addc-4039-afab-aea79099b9f2/util/0.log" Mar 20 
00:29:29 crc kubenswrapper[5106]: I0320 00:29:29.939519 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc_2f3907be-addc-4039-afab-aea79099b9f2/util/0.log" Mar 20 00:29:29 crc kubenswrapper[5106]: I0320 00:29:29.950604 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc_2f3907be-addc-4039-afab-aea79099b9f2/pull/0.log" Mar 20 00:29:29 crc kubenswrapper[5106]: I0320 00:29:29.954174 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc_2f3907be-addc-4039-afab-aea79099b9f2/pull/0.log" Mar 20 00:29:30 crc kubenswrapper[5106]: I0320 00:29:30.180949 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc_2f3907be-addc-4039-afab-aea79099b9f2/extract/0.log" Mar 20 00:29:30 crc kubenswrapper[5106]: I0320 00:29:30.186287 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc_2f3907be-addc-4039-afab-aea79099b9f2/util/0.log" Mar 20 00:29:30 crc kubenswrapper[5106]: I0320 00:29:30.214592 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtrkc_2f3907be-addc-4039-afab-aea79099b9f2/pull/0.log" Mar 20 00:29:30 crc kubenswrapper[5106]: I0320 00:29:30.340501 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk_2607c5c5-17d2-449d-a4e2-679a43300ccb/util/0.log" Mar 20 00:29:30 crc kubenswrapper[5106]: I0320 00:29:30.516213 5106 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk_2607c5c5-17d2-449d-a4e2-679a43300ccb/util/0.log" Mar 20 00:29:30 crc kubenswrapper[5106]: I0320 00:29:30.517846 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk_2607c5c5-17d2-449d-a4e2-679a43300ccb/pull/0.log" Mar 20 00:29:30 crc kubenswrapper[5106]: I0320 00:29:30.542071 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk_2607c5c5-17d2-449d-a4e2-679a43300ccb/pull/0.log" Mar 20 00:29:30 crc kubenswrapper[5106]: I0320 00:29:30.697816 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk_2607c5c5-17d2-449d-a4e2-679a43300ccb/pull/0.log" Mar 20 00:29:30 crc kubenswrapper[5106]: I0320 00:29:30.713846 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk_2607c5c5-17d2-449d-a4e2-679a43300ccb/util/0.log" Mar 20 00:29:30 crc kubenswrapper[5106]: I0320 00:29:30.715255 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vsdlk_2607c5c5-17d2-449d-a4e2-679a43300ccb/extract/0.log" Mar 20 00:29:30 crc kubenswrapper[5106]: I0320 00:29:30.850815 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s25nb_4cfbf52b-0060-44e8-9485-e5c04de2ad60/extract-utilities/0.log" Mar 20 00:29:30 crc kubenswrapper[5106]: I0320 00:29:30.989972 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s25nb_4cfbf52b-0060-44e8-9485-e5c04de2ad60/extract-utilities/0.log" Mar 20 00:29:31 crc kubenswrapper[5106]: I0320 
00:29:31.036136 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s25nb_4cfbf52b-0060-44e8-9485-e5c04de2ad60/extract-content/0.log" Mar 20 00:29:31 crc kubenswrapper[5106]: I0320 00:29:31.041557 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s25nb_4cfbf52b-0060-44e8-9485-e5c04de2ad60/extract-content/0.log" Mar 20 00:29:31 crc kubenswrapper[5106]: I0320 00:29:31.192894 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s25nb_4cfbf52b-0060-44e8-9485-e5c04de2ad60/extract-content/0.log" Mar 20 00:29:31 crc kubenswrapper[5106]: I0320 00:29:31.219528 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s25nb_4cfbf52b-0060-44e8-9485-e5c04de2ad60/extract-utilities/0.log" Mar 20 00:29:31 crc kubenswrapper[5106]: I0320 00:29:31.389832 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s25nb_4cfbf52b-0060-44e8-9485-e5c04de2ad60/registry-server/0.log" Mar 20 00:29:31 crc kubenswrapper[5106]: I0320 00:29:31.417679 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-znlnc_a880c089-c934-4c9e-a478-0d9d53a55c81/extract-utilities/0.log" Mar 20 00:29:31 crc kubenswrapper[5106]: I0320 00:29:31.574861 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-znlnc_a880c089-c934-4c9e-a478-0d9d53a55c81/extract-utilities/0.log" Mar 20 00:29:31 crc kubenswrapper[5106]: I0320 00:29:31.580927 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-znlnc_a880c089-c934-4c9e-a478-0d9d53a55c81/extract-content/0.log" Mar 20 00:29:31 crc kubenswrapper[5106]: I0320 00:29:31.582253 5106 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-znlnc_a880c089-c934-4c9e-a478-0d9d53a55c81/extract-content/0.log" Mar 20 00:29:31 crc kubenswrapper[5106]: I0320 00:29:31.750535 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-znlnc_a880c089-c934-4c9e-a478-0d9d53a55c81/extract-content/0.log" Mar 20 00:29:31 crc kubenswrapper[5106]: I0320 00:29:31.758853 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-znlnc_a880c089-c934-4c9e-a478-0d9d53a55c81/extract-utilities/0.log" Mar 20 00:29:31 crc kubenswrapper[5106]: I0320 00:29:31.796128 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-ltdql_37e54f88-deec-4246-981e-cae42f1f759f/marketplace-operator/0.log" Mar 20 00:29:31 crc kubenswrapper[5106]: I0320 00:29:31.990611 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gvc7_cdd7758c-5444-4836-adae-5613bdf96c2f/extract-utilities/0.log" Mar 20 00:29:32 crc kubenswrapper[5106]: I0320 00:29:32.048913 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-znlnc_a880c089-c934-4c9e-a478-0d9d53a55c81/registry-server/0.log" Mar 20 00:29:32 crc kubenswrapper[5106]: I0320 00:29:32.163511 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gvc7_cdd7758c-5444-4836-adae-5613bdf96c2f/extract-content/0.log" Mar 20 00:29:32 crc kubenswrapper[5106]: I0320 00:29:32.167810 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gvc7_cdd7758c-5444-4836-adae-5613bdf96c2f/extract-content/0.log" Mar 20 00:29:32 crc kubenswrapper[5106]: I0320 00:29:32.167825 5106 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-6gvc7_cdd7758c-5444-4836-adae-5613bdf96c2f/extract-utilities/0.log" Mar 20 00:29:32 crc kubenswrapper[5106]: I0320 00:29:32.424231 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gvc7_cdd7758c-5444-4836-adae-5613bdf96c2f/extract-content/0.log" Mar 20 00:29:32 crc kubenswrapper[5106]: I0320 00:29:32.531981 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gvc7_cdd7758c-5444-4836-adae-5613bdf96c2f/extract-utilities/0.log" Mar 20 00:29:32 crc kubenswrapper[5106]: I0320 00:29:32.782756 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6gvc7_cdd7758c-5444-4836-adae-5613bdf96c2f/registry-server/0.log" Mar 20 00:29:43 crc kubenswrapper[5106]: I0320 00:29:43.722873 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-659dbf9598-bbc28_2252d2cb-9b32-4137-8368-8b6c9bf4a267/prometheus-operator-admission-webhook/0.log" Mar 20 00:29:43 crc kubenswrapper[5106]: I0320 00:29:43.756921 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-55568fc96c-xxrkx_a5a00efc-0f62-4415-97e0-e0bfd2f1276a/prometheus-operator/0.log" Mar 20 00:29:43 crc kubenswrapper[5106]: I0320 00:29:43.786662 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-659dbf9598-p8g95_343fff88-557a-4473-b878-7badd8470e8c/prometheus-operator-admission-webhook/0.log" Mar 20 00:29:43 crc kubenswrapper[5106]: I0320 00:29:43.889459 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-587f9c8867-zp5zg_eca89ef6-a7ff-48b6-a250-3b10b73a40be/operator/0.log" Mar 20 00:29:43 crc kubenswrapper[5106]: I0320 00:29:43.919641 5106 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-6b7c6d77c9-8v544_78131e28-13b9-46ee-b506-d7d79f747263/perses-operator/0.log" Mar 20 00:29:55 crc kubenswrapper[5106]: I0320 00:29:55.373168 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:29:55 crc kubenswrapper[5106]: I0320 00:29:55.373855 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:29:55 crc kubenswrapper[5106]: I0320 00:29:55.373916 5106 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:29:55 crc kubenswrapper[5106]: I0320 00:29:55.374885 5106 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf7ec03b37e8c742f509fc6499c29d91dba9387492f15bc72efea2582dec2229"} pod="openshift-machine-config-operator/machine-config-daemon-769dn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 00:29:55 crc kubenswrapper[5106]: I0320 00:29:55.374976 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" containerID="cri-o://cf7ec03b37e8c742f509fc6499c29d91dba9387492f15bc72efea2582dec2229" gracePeriod=600 Mar 20 00:29:55 crc kubenswrapper[5106]: I0320 00:29:55.565752 5106 generic.go:358] 
"Generic (PLEG): container finished" podID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerID="cf7ec03b37e8c742f509fc6499c29d91dba9387492f15bc72efea2582dec2229" exitCode=0 Mar 20 00:29:55 crc kubenswrapper[5106]: I0320 00:29:55.565815 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerDied","Data":"cf7ec03b37e8c742f509fc6499c29d91dba9387492f15bc72efea2582dec2229"} Mar 20 00:29:55 crc kubenswrapper[5106]: I0320 00:29:55.566405 5106 scope.go:117] "RemoveContainer" containerID="6986ec753318922c38954c5594d06021e7ff8e83bd99bff58c1e865b369e05df" Mar 20 00:29:56 crc kubenswrapper[5106]: I0320 00:29:56.576924 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerStarted","Data":"46616f9eebe95046bc1d2edb3df0d47caf5a433032dac93086437bbdcf07a2b3"} Mar 20 00:29:56 crc kubenswrapper[5106]: I0320 00:29:56.595283 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-g6584/must-gather-sz5fs" podStartSLOduration=107.541108157 podStartE2EDuration="1m52.595263906s" podCreationTimestamp="2026-03-20 00:28:04 +0000 UTC" firstStartedPulling="2026-03-20 00:28:04.645661549 +0000 UTC m=+1139.079395603" lastFinishedPulling="2026-03-20 00:28:09.699817298 +0000 UTC m=+1144.133551352" observedRunningTime="2026-03-20 00:28:10.722151911 +0000 UTC m=+1145.155885965" watchObservedRunningTime="2026-03-20 00:29:56.595263906 +0000 UTC m=+1251.028997950" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.155237 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566110-8lxph"] Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.164305 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566110-8lxph" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.164500 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk"] Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.170858 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566110-8lxph"] Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.171018 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.171696 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.171787 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.176357 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk"] Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.178309 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5fjw8\"" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.178613 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.178797 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.203989 5106 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/564a5ed4-70d5-43ed-967d-9084054c5b8c-config-volume\") pod \"collect-profiles-29566110-dbbhk\" (UID: \"564a5ed4-70d5-43ed-967d-9084054c5b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.204031 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv6hn\" (UniqueName: \"kubernetes.io/projected/5ebaba9c-d46d-4939-8153-7f69c80c3c96-kube-api-access-zv6hn\") pod \"auto-csr-approver-29566110-8lxph\" (UID: \"5ebaba9c-d46d-4939-8153-7f69c80c3c96\") " pod="openshift-infra/auto-csr-approver-29566110-8lxph" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.204102 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh8d2\" (UniqueName: \"kubernetes.io/projected/564a5ed4-70d5-43ed-967d-9084054c5b8c-kube-api-access-xh8d2\") pod \"collect-profiles-29566110-dbbhk\" (UID: \"564a5ed4-70d5-43ed-967d-9084054c5b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.204126 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/564a5ed4-70d5-43ed-967d-9084054c5b8c-secret-volume\") pod \"collect-profiles-29566110-dbbhk\" (UID: \"564a5ed4-70d5-43ed-967d-9084054c5b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.305016 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/564a5ed4-70d5-43ed-967d-9084054c5b8c-config-volume\") pod \"collect-profiles-29566110-dbbhk\" (UID: 
\"564a5ed4-70d5-43ed-967d-9084054c5b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.305064 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zv6hn\" (UniqueName: \"kubernetes.io/projected/5ebaba9c-d46d-4939-8153-7f69c80c3c96-kube-api-access-zv6hn\") pod \"auto-csr-approver-29566110-8lxph\" (UID: \"5ebaba9c-d46d-4939-8153-7f69c80c3c96\") " pod="openshift-infra/auto-csr-approver-29566110-8lxph" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.305114 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xh8d2\" (UniqueName: \"kubernetes.io/projected/564a5ed4-70d5-43ed-967d-9084054c5b8c-kube-api-access-xh8d2\") pod \"collect-profiles-29566110-dbbhk\" (UID: \"564a5ed4-70d5-43ed-967d-9084054c5b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.305137 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/564a5ed4-70d5-43ed-967d-9084054c5b8c-secret-volume\") pod \"collect-profiles-29566110-dbbhk\" (UID: \"564a5ed4-70d5-43ed-967d-9084054c5b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.306688 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/564a5ed4-70d5-43ed-967d-9084054c5b8c-config-volume\") pod \"collect-profiles-29566110-dbbhk\" (UID: \"564a5ed4-70d5-43ed-967d-9084054c5b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.326446 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/564a5ed4-70d5-43ed-967d-9084054c5b8c-secret-volume\") pod \"collect-profiles-29566110-dbbhk\" (UID: \"564a5ed4-70d5-43ed-967d-9084054c5b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.330160 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh8d2\" (UniqueName: \"kubernetes.io/projected/564a5ed4-70d5-43ed-967d-9084054c5b8c-kube-api-access-xh8d2\") pod \"collect-profiles-29566110-dbbhk\" (UID: \"564a5ed4-70d5-43ed-967d-9084054c5b8c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.335823 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv6hn\" (UniqueName: \"kubernetes.io/projected/5ebaba9c-d46d-4939-8153-7f69c80c3c96-kube-api-access-zv6hn\") pod \"auto-csr-approver-29566110-8lxph\" (UID: \"5ebaba9c-d46d-4939-8153-7f69c80c3c96\") " pod="openshift-infra/auto-csr-approver-29566110-8lxph" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.485666 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566110-8lxph" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.497127 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.737170 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566110-8lxph"] Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.743819 5106 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 00:30:00 crc kubenswrapper[5106]: I0320 00:30:00.915076 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk"] Mar 20 00:30:00 crc kubenswrapper[5106]: W0320 00:30:00.915744 5106 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod564a5ed4_70d5_43ed_967d_9084054c5b8c.slice/crio-32f194cb45509a123a5c9af6c4a27f7e1130ef5b549ed45e191f9c675032c4c7 WatchSource:0}: Error finding container 32f194cb45509a123a5c9af6c4a27f7e1130ef5b549ed45e191f9c675032c4c7: Status 404 returned error can't find the container with id 32f194cb45509a123a5c9af6c4a27f7e1130ef5b549ed45e191f9c675032c4c7 Mar 20 00:30:01 crc kubenswrapper[5106]: I0320 00:30:01.625338 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566110-8lxph" event={"ID":"5ebaba9c-d46d-4939-8153-7f69c80c3c96","Type":"ContainerStarted","Data":"0f7d9a3ca8b4296ebb5f0978b50d33070e0b4aba85b5bb50c06e87a7c054b131"} Mar 20 00:30:01 crc kubenswrapper[5106]: I0320 00:30:01.627137 5106 generic.go:358] "Generic (PLEG): container finished" podID="564a5ed4-70d5-43ed-967d-9084054c5b8c" containerID="6015981a01b97221af81af563a8866c8429a3c79512ebb5c384e31f68645f4e6" exitCode=0 Mar 20 00:30:01 crc kubenswrapper[5106]: I0320 00:30:01.627291 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" 
event={"ID":"564a5ed4-70d5-43ed-967d-9084054c5b8c","Type":"ContainerDied","Data":"6015981a01b97221af81af563a8866c8429a3c79512ebb5c384e31f68645f4e6"} Mar 20 00:30:01 crc kubenswrapper[5106]: I0320 00:30:01.627314 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" event={"ID":"564a5ed4-70d5-43ed-967d-9084054c5b8c","Type":"ContainerStarted","Data":"32f194cb45509a123a5c9af6c4a27f7e1130ef5b549ed45e191f9c675032c4c7"} Mar 20 00:30:02 crc kubenswrapper[5106]: I0320 00:30:02.636787 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566110-8lxph" event={"ID":"5ebaba9c-d46d-4939-8153-7f69c80c3c96","Type":"ContainerStarted","Data":"7c973482595857c280d0470eb7c64ab2d48450a8a50da94cd4ece4d7ce5afb11"} Mar 20 00:30:02 crc kubenswrapper[5106]: I0320 00:30:02.651276 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566110-8lxph" podStartSLOduration=1.200467633 podStartE2EDuration="2.651259056s" podCreationTimestamp="2026-03-20 00:30:00 +0000 UTC" firstStartedPulling="2026-03-20 00:30:00.744006494 +0000 UTC m=+1255.177740548" lastFinishedPulling="2026-03-20 00:30:02.194797917 +0000 UTC m=+1256.628531971" observedRunningTime="2026-03-20 00:30:02.648513908 +0000 UTC m=+1257.082247962" watchObservedRunningTime="2026-03-20 00:30:02.651259056 +0000 UTC m=+1257.084993110" Mar 20 00:30:02 crc kubenswrapper[5106]: I0320 00:30:02.901040 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" Mar 20 00:30:02 crc kubenswrapper[5106]: I0320 00:30:02.945608 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh8d2\" (UniqueName: \"kubernetes.io/projected/564a5ed4-70d5-43ed-967d-9084054c5b8c-kube-api-access-xh8d2\") pod \"564a5ed4-70d5-43ed-967d-9084054c5b8c\" (UID: \"564a5ed4-70d5-43ed-967d-9084054c5b8c\") " Mar 20 00:30:02 crc kubenswrapper[5106]: I0320 00:30:02.945730 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/564a5ed4-70d5-43ed-967d-9084054c5b8c-config-volume\") pod \"564a5ed4-70d5-43ed-967d-9084054c5b8c\" (UID: \"564a5ed4-70d5-43ed-967d-9084054c5b8c\") " Mar 20 00:30:02 crc kubenswrapper[5106]: I0320 00:30:02.945769 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/564a5ed4-70d5-43ed-967d-9084054c5b8c-secret-volume\") pod \"564a5ed4-70d5-43ed-967d-9084054c5b8c\" (UID: \"564a5ed4-70d5-43ed-967d-9084054c5b8c\") " Mar 20 00:30:02 crc kubenswrapper[5106]: I0320 00:30:02.947630 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/564a5ed4-70d5-43ed-967d-9084054c5b8c-config-volume" (OuterVolumeSpecName: "config-volume") pod "564a5ed4-70d5-43ed-967d-9084054c5b8c" (UID: "564a5ed4-70d5-43ed-967d-9084054c5b8c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 20 00:30:02 crc kubenswrapper[5106]: I0320 00:30:02.952178 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564a5ed4-70d5-43ed-967d-9084054c5b8c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "564a5ed4-70d5-43ed-967d-9084054c5b8c" (UID: "564a5ed4-70d5-43ed-967d-9084054c5b8c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 20 00:30:02 crc kubenswrapper[5106]: I0320 00:30:02.959448 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564a5ed4-70d5-43ed-967d-9084054c5b8c-kube-api-access-xh8d2" (OuterVolumeSpecName: "kube-api-access-xh8d2") pod "564a5ed4-70d5-43ed-967d-9084054c5b8c" (UID: "564a5ed4-70d5-43ed-967d-9084054c5b8c"). InnerVolumeSpecName "kube-api-access-xh8d2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:30:03 crc kubenswrapper[5106]: I0320 00:30:03.051000 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xh8d2\" (UniqueName: \"kubernetes.io/projected/564a5ed4-70d5-43ed-967d-9084054c5b8c-kube-api-access-xh8d2\") on node \"crc\" DevicePath \"\"" Mar 20 00:30:03 crc kubenswrapper[5106]: I0320 00:30:03.051052 5106 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/564a5ed4-70d5-43ed-967d-9084054c5b8c-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 00:30:03 crc kubenswrapper[5106]: I0320 00:30:03.051063 5106 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/564a5ed4-70d5-43ed-967d-9084054c5b8c-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 00:30:03 crc kubenswrapper[5106]: I0320 00:30:03.645817 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" Mar 20 00:30:03 crc kubenswrapper[5106]: I0320 00:30:03.645826 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566110-dbbhk" event={"ID":"564a5ed4-70d5-43ed-967d-9084054c5b8c","Type":"ContainerDied","Data":"32f194cb45509a123a5c9af6c4a27f7e1130ef5b549ed45e191f9c675032c4c7"} Mar 20 00:30:03 crc kubenswrapper[5106]: I0320 00:30:03.646292 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32f194cb45509a123a5c9af6c4a27f7e1130ef5b549ed45e191f9c675032c4c7" Mar 20 00:30:03 crc kubenswrapper[5106]: I0320 00:30:03.653665 5106 generic.go:358] "Generic (PLEG): container finished" podID="5ebaba9c-d46d-4939-8153-7f69c80c3c96" containerID="7c973482595857c280d0470eb7c64ab2d48450a8a50da94cd4ece4d7ce5afb11" exitCode=0 Mar 20 00:30:03 crc kubenswrapper[5106]: I0320 00:30:03.653763 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566110-8lxph" event={"ID":"5ebaba9c-d46d-4939-8153-7f69c80c3c96","Type":"ContainerDied","Data":"7c973482595857c280d0470eb7c64ab2d48450a8a50da94cd4ece4d7ce5afb11"} Mar 20 00:30:04 crc kubenswrapper[5106]: I0320 00:30:04.924041 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566110-8lxph" Mar 20 00:30:04 crc kubenswrapper[5106]: I0320 00:30:04.981007 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv6hn\" (UniqueName: \"kubernetes.io/projected/5ebaba9c-d46d-4939-8153-7f69c80c3c96-kube-api-access-zv6hn\") pod \"5ebaba9c-d46d-4939-8153-7f69c80c3c96\" (UID: \"5ebaba9c-d46d-4939-8153-7f69c80c3c96\") " Mar 20 00:30:04 crc kubenswrapper[5106]: I0320 00:30:04.989353 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebaba9c-d46d-4939-8153-7f69c80c3c96-kube-api-access-zv6hn" (OuterVolumeSpecName: "kube-api-access-zv6hn") pod "5ebaba9c-d46d-4939-8153-7f69c80c3c96" (UID: "5ebaba9c-d46d-4939-8153-7f69c80c3c96"). InnerVolumeSpecName "kube-api-access-zv6hn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:30:05 crc kubenswrapper[5106]: I0320 00:30:05.083657 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zv6hn\" (UniqueName: \"kubernetes.io/projected/5ebaba9c-d46d-4939-8153-7f69c80c3c96-kube-api-access-zv6hn\") on node \"crc\" DevicePath \"\"" Mar 20 00:30:05 crc kubenswrapper[5106]: I0320 00:30:05.683893 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566110-8lxph" event={"ID":"5ebaba9c-d46d-4939-8153-7f69c80c3c96","Type":"ContainerDied","Data":"0f7d9a3ca8b4296ebb5f0978b50d33070e0b4aba85b5bb50c06e87a7c054b131"} Mar 20 00:30:05 crc kubenswrapper[5106]: I0320 00:30:05.683950 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566110-8lxph" Mar 20 00:30:05 crc kubenswrapper[5106]: I0320 00:30:05.684003 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f7d9a3ca8b4296ebb5f0978b50d33070e0b4aba85b5bb50c06e87a7c054b131" Mar 20 00:30:05 crc kubenswrapper[5106]: I0320 00:30:05.712022 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566104-7kwwp"] Mar 20 00:30:05 crc kubenswrapper[5106]: I0320 00:30:05.727702 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566104-7kwwp"] Mar 20 00:30:07 crc kubenswrapper[5106]: I0320 00:30:07.175238 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b656fa81-2c43-4fa0-a4af-7f8fe391cc0c" path="/var/lib/kubelet/pods/b656fa81-2c43-4fa0-a4af-7f8fe391cc0c/volumes" Mar 20 00:30:23 crc kubenswrapper[5106]: I0320 00:30:23.181345 5106 scope.go:117] "RemoveContainer" containerID="d512cc822b3bc239e7bdf571d2bf6f3c3909a88bc9416ccbd18400a5b62e194d" Mar 20 00:30:28 crc kubenswrapper[5106]: I0320 00:30:28.915513 5106 generic.go:358] "Generic (PLEG): container finished" podID="7846f3f3-7b74-42ae-a08f-67b54cd3c91f" containerID="fd9684f10b0c64284c4bdb14fba926173b3f9b5cb4f02e4db2b737858975acb9" exitCode=0 Mar 20 00:30:28 crc kubenswrapper[5106]: I0320 00:30:28.915614 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-g6584/must-gather-sz5fs" event={"ID":"7846f3f3-7b74-42ae-a08f-67b54cd3c91f","Type":"ContainerDied","Data":"fd9684f10b0c64284c4bdb14fba926173b3f9b5cb4f02e4db2b737858975acb9"} Mar 20 00:30:28 crc kubenswrapper[5106]: I0320 00:30:28.916821 5106 scope.go:117] "RemoveContainer" containerID="fd9684f10b0c64284c4bdb14fba926173b3f9b5cb4f02e4db2b737858975acb9" Mar 20 00:30:29 crc kubenswrapper[5106]: I0320 00:30:29.336405 5106 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-g6584_must-gather-sz5fs_7846f3f3-7b74-42ae-a08f-67b54cd3c91f/gather/0.log" Mar 20 00:30:35 crc kubenswrapper[5106]: I0320 00:30:35.588244 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-g6584/must-gather-sz5fs"] Mar 20 00:30:35 crc kubenswrapper[5106]: I0320 00:30:35.589038 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-g6584/must-gather-sz5fs" podUID="7846f3f3-7b74-42ae-a08f-67b54cd3c91f" containerName="copy" containerID="cri-o://bf43a6df36c2aee5fca29b4e525fbd96ed4b79057f5d3f7e4dd1775e3c3c6d35" gracePeriod=2 Mar 20 00:30:35 crc kubenswrapper[5106]: I0320 00:30:35.592784 5106 status_manager.go:895] "Failed to get status for pod" podUID="7846f3f3-7b74-42ae-a08f-67b54cd3c91f" pod="openshift-must-gather-g6584/must-gather-sz5fs" err="pods \"must-gather-sz5fs\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-g6584\": no relationship found between node 'crc' and this object" Mar 20 00:30:35 crc kubenswrapper[5106]: I0320 00:30:35.593627 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-g6584/must-gather-sz5fs"] Mar 20 00:30:35 crc kubenswrapper[5106]: I0320 00:30:35.965800 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g6584_must-gather-sz5fs_7846f3f3-7b74-42ae-a08f-67b54cd3c91f/copy/0.log" Mar 20 00:30:35 crc kubenswrapper[5106]: I0320 00:30:35.966646 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g6584/must-gather-sz5fs" Mar 20 00:30:35 crc kubenswrapper[5106]: I0320 00:30:35.968644 5106 status_manager.go:895] "Failed to get status for pod" podUID="7846f3f3-7b74-42ae-a08f-67b54cd3c91f" pod="openshift-must-gather-g6584/must-gather-sz5fs" err="pods \"must-gather-sz5fs\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-g6584\": no relationship found between node 'crc' and this object" Mar 20 00:30:35 crc kubenswrapper[5106]: I0320 00:30:35.976496 5106 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-g6584_must-gather-sz5fs_7846f3f3-7b74-42ae-a08f-67b54cd3c91f/copy/0.log" Mar 20 00:30:35 crc kubenswrapper[5106]: I0320 00:30:35.976844 5106 generic.go:358] "Generic (PLEG): container finished" podID="7846f3f3-7b74-42ae-a08f-67b54cd3c91f" containerID="bf43a6df36c2aee5fca29b4e525fbd96ed4b79057f5d3f7e4dd1775e3c3c6d35" exitCode=143 Mar 20 00:30:35 crc kubenswrapper[5106]: I0320 00:30:35.976897 5106 scope.go:117] "RemoveContainer" containerID="bf43a6df36c2aee5fca29b4e525fbd96ed4b79057f5d3f7e4dd1775e3c3c6d35" Mar 20 00:30:35 crc kubenswrapper[5106]: I0320 00:30:35.977024 5106 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-g6584/must-gather-sz5fs" Mar 20 00:30:35 crc kubenswrapper[5106]: I0320 00:30:35.978244 5106 status_manager.go:895] "Failed to get status for pod" podUID="7846f3f3-7b74-42ae-a08f-67b54cd3c91f" pod="openshift-must-gather-g6584/must-gather-sz5fs" err="pods \"must-gather-sz5fs\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-g6584\": no relationship found between node 'crc' and this object" Mar 20 00:30:36 crc kubenswrapper[5106]: I0320 00:30:36.014655 5106 scope.go:117] "RemoveContainer" containerID="fd9684f10b0c64284c4bdb14fba926173b3f9b5cb4f02e4db2b737858975acb9" Mar 20 00:30:36 crc kubenswrapper[5106]: I0320 00:30:36.101932 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd2g5\" (UniqueName: \"kubernetes.io/projected/7846f3f3-7b74-42ae-a08f-67b54cd3c91f-kube-api-access-dd2g5\") pod \"7846f3f3-7b74-42ae-a08f-67b54cd3c91f\" (UID: \"7846f3f3-7b74-42ae-a08f-67b54cd3c91f\") " Mar 20 00:30:36 crc kubenswrapper[5106]: I0320 00:30:36.102095 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7846f3f3-7b74-42ae-a08f-67b54cd3c91f-must-gather-output\") pod \"7846f3f3-7b74-42ae-a08f-67b54cd3c91f\" (UID: \"7846f3f3-7b74-42ae-a08f-67b54cd3c91f\") " Mar 20 00:30:36 crc kubenswrapper[5106]: I0320 00:30:36.105334 5106 scope.go:117] "RemoveContainer" containerID="bf43a6df36c2aee5fca29b4e525fbd96ed4b79057f5d3f7e4dd1775e3c3c6d35" Mar 20 00:30:36 crc kubenswrapper[5106]: E0320 00:30:36.107406 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf43a6df36c2aee5fca29b4e525fbd96ed4b79057f5d3f7e4dd1775e3c3c6d35\": container with ID starting with bf43a6df36c2aee5fca29b4e525fbd96ed4b79057f5d3f7e4dd1775e3c3c6d35 not found: ID does not exist" 
containerID="bf43a6df36c2aee5fca29b4e525fbd96ed4b79057f5d3f7e4dd1775e3c3c6d35" Mar 20 00:30:36 crc kubenswrapper[5106]: I0320 00:30:36.107445 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf43a6df36c2aee5fca29b4e525fbd96ed4b79057f5d3f7e4dd1775e3c3c6d35"} err="failed to get container status \"bf43a6df36c2aee5fca29b4e525fbd96ed4b79057f5d3f7e4dd1775e3c3c6d35\": rpc error: code = NotFound desc = could not find container \"bf43a6df36c2aee5fca29b4e525fbd96ed4b79057f5d3f7e4dd1775e3c3c6d35\": container with ID starting with bf43a6df36c2aee5fca29b4e525fbd96ed4b79057f5d3f7e4dd1775e3c3c6d35 not found: ID does not exist" Mar 20 00:30:36 crc kubenswrapper[5106]: I0320 00:30:36.107465 5106 scope.go:117] "RemoveContainer" containerID="fd9684f10b0c64284c4bdb14fba926173b3f9b5cb4f02e4db2b737858975acb9" Mar 20 00:30:36 crc kubenswrapper[5106]: E0320 00:30:36.107942 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd9684f10b0c64284c4bdb14fba926173b3f9b5cb4f02e4db2b737858975acb9\": container with ID starting with fd9684f10b0c64284c4bdb14fba926173b3f9b5cb4f02e4db2b737858975acb9 not found: ID does not exist" containerID="fd9684f10b0c64284c4bdb14fba926173b3f9b5cb4f02e4db2b737858975acb9" Mar 20 00:30:36 crc kubenswrapper[5106]: I0320 00:30:36.107990 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd9684f10b0c64284c4bdb14fba926173b3f9b5cb4f02e4db2b737858975acb9"} err="failed to get container status \"fd9684f10b0c64284c4bdb14fba926173b3f9b5cb4f02e4db2b737858975acb9\": rpc error: code = NotFound desc = could not find container \"fd9684f10b0c64284c4bdb14fba926173b3f9b5cb4f02e4db2b737858975acb9\": container with ID starting with fd9684f10b0c64284c4bdb14fba926173b3f9b5cb4f02e4db2b737858975acb9 not found: ID does not exist" Mar 20 00:30:36 crc kubenswrapper[5106]: I0320 00:30:36.109427 5106 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7846f3f3-7b74-42ae-a08f-67b54cd3c91f-kube-api-access-dd2g5" (OuterVolumeSpecName: "kube-api-access-dd2g5") pod "7846f3f3-7b74-42ae-a08f-67b54cd3c91f" (UID: "7846f3f3-7b74-42ae-a08f-67b54cd3c91f"). InnerVolumeSpecName "kube-api-access-dd2g5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 20 00:30:36 crc kubenswrapper[5106]: I0320 00:30:36.177427 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7846f3f3-7b74-42ae-a08f-67b54cd3c91f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7846f3f3-7b74-42ae-a08f-67b54cd3c91f" (UID: "7846f3f3-7b74-42ae-a08f-67b54cd3c91f"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Mar 20 00:30:36 crc kubenswrapper[5106]: I0320 00:30:36.204670 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dd2g5\" (UniqueName: \"kubernetes.io/projected/7846f3f3-7b74-42ae-a08f-67b54cd3c91f-kube-api-access-dd2g5\") on node \"crc\" DevicePath \"\"" Mar 20 00:30:36 crc kubenswrapper[5106]: I0320 00:30:36.204699 5106 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7846f3f3-7b74-42ae-a08f-67b54cd3c91f-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 20 00:30:36 crc kubenswrapper[5106]: I0320 00:30:36.295637 5106 status_manager.go:895] "Failed to get status for pod" podUID="7846f3f3-7b74-42ae-a08f-67b54cd3c91f" pod="openshift-must-gather-g6584/must-gather-sz5fs" err="pods \"must-gather-sz5fs\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-g6584\": no relationship found between node 'crc' and this object" Mar 20 00:30:37 crc kubenswrapper[5106]: I0320 00:30:37.170138 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7846f3f3-7b74-42ae-a08f-67b54cd3c91f" path="/var/lib/kubelet/pods/7846f3f3-7b74-42ae-a08f-67b54cd3c91f/volumes" Mar 20 00:30:37 crc kubenswrapper[5106]: I0320 00:30:37.170622 5106 status_manager.go:895] "Failed to get status for pod" podUID="7846f3f3-7b74-42ae-a08f-67b54cd3c91f" pod="openshift-must-gather-g6584/must-gather-sz5fs" err="pods \"must-gather-sz5fs\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-g6584\": no relationship found between node 'crc' and this object" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.361340 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cg5d2"] Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.365655 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7846f3f3-7b74-42ae-a08f-67b54cd3c91f" containerName="gather" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.365901 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="7846f3f3-7b74-42ae-a08f-67b54cd3c91f" containerName="gather" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.366236 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ebaba9c-d46d-4939-8153-7f69c80c3c96" containerName="oc" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.366300 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ebaba9c-d46d-4939-8153-7f69c80c3c96" containerName="oc" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.366340 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7846f3f3-7b74-42ae-a08f-67b54cd3c91f" containerName="copy" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.366349 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="7846f3f3-7b74-42ae-a08f-67b54cd3c91f" containerName="copy" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.366360 5106 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="564a5ed4-70d5-43ed-967d-9084054c5b8c" containerName="collect-profiles" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.366368 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="564a5ed4-70d5-43ed-967d-9084054c5b8c" containerName="collect-profiles" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.366722 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="7846f3f3-7b74-42ae-a08f-67b54cd3c91f" containerName="gather" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.366736 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="5ebaba9c-d46d-4939-8153-7f69c80c3c96" containerName="oc" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.366748 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="7846f3f3-7b74-42ae-a08f-67b54cd3c91f" containerName="copy" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.366760 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="564a5ed4-70d5-43ed-967d-9084054c5b8c" containerName="collect-profiles" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.373335 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cg5d2" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.380680 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cg5d2"] Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.451576 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86c1cfd8-fc51-4887-b577-db6c68dcf65f-catalog-content\") pod \"redhat-operators-cg5d2\" (UID: \"86c1cfd8-fc51-4887-b577-db6c68dcf65f\") " pod="openshift-marketplace/redhat-operators-cg5d2" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.451661 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hjh6\" (UniqueName: \"kubernetes.io/projected/86c1cfd8-fc51-4887-b577-db6c68dcf65f-kube-api-access-9hjh6\") pod \"redhat-operators-cg5d2\" (UID: \"86c1cfd8-fc51-4887-b577-db6c68dcf65f\") " pod="openshift-marketplace/redhat-operators-cg5d2" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.451738 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86c1cfd8-fc51-4887-b577-db6c68dcf65f-utilities\") pod \"redhat-operators-cg5d2\" (UID: \"86c1cfd8-fc51-4887-b577-db6c68dcf65f\") " pod="openshift-marketplace/redhat-operators-cg5d2" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.553639 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9hjh6\" (UniqueName: \"kubernetes.io/projected/86c1cfd8-fc51-4887-b577-db6c68dcf65f-kube-api-access-9hjh6\") pod \"redhat-operators-cg5d2\" (UID: \"86c1cfd8-fc51-4887-b577-db6c68dcf65f\") " pod="openshift-marketplace/redhat-operators-cg5d2" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.553747 5106 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86c1cfd8-fc51-4887-b577-db6c68dcf65f-utilities\") pod \"redhat-operators-cg5d2\" (UID: \"86c1cfd8-fc51-4887-b577-db6c68dcf65f\") " pod="openshift-marketplace/redhat-operators-cg5d2" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.553867 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86c1cfd8-fc51-4887-b577-db6c68dcf65f-catalog-content\") pod \"redhat-operators-cg5d2\" (UID: \"86c1cfd8-fc51-4887-b577-db6c68dcf65f\") " pod="openshift-marketplace/redhat-operators-cg5d2" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.554553 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86c1cfd8-fc51-4887-b577-db6c68dcf65f-catalog-content\") pod \"redhat-operators-cg5d2\" (UID: \"86c1cfd8-fc51-4887-b577-db6c68dcf65f\") " pod="openshift-marketplace/redhat-operators-cg5d2" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.554974 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86c1cfd8-fc51-4887-b577-db6c68dcf65f-utilities\") pod \"redhat-operators-cg5d2\" (UID: \"86c1cfd8-fc51-4887-b577-db6c68dcf65f\") " pod="openshift-marketplace/redhat-operators-cg5d2" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.576473 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hjh6\" (UniqueName: \"kubernetes.io/projected/86c1cfd8-fc51-4887-b577-db6c68dcf65f-kube-api-access-9hjh6\") pod \"redhat-operators-cg5d2\" (UID: \"86c1cfd8-fc51-4887-b577-db6c68dcf65f\") " pod="openshift-marketplace/redhat-operators-cg5d2" Mar 20 00:31:42 crc kubenswrapper[5106]: I0320 00:31:42.698904 5106 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cg5d2" Mar 20 00:31:43 crc kubenswrapper[5106]: I0320 00:31:43.117277 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cg5d2"] Mar 20 00:31:43 crc kubenswrapper[5106]: I0320 00:31:43.527042 5106 generic.go:358] "Generic (PLEG): container finished" podID="86c1cfd8-fc51-4887-b577-db6c68dcf65f" containerID="a459bde6c3632ed3f156e113d8709d383f0c7e7bc86b2b84d49c00195e678286" exitCode=0 Mar 20 00:31:43 crc kubenswrapper[5106]: I0320 00:31:43.527145 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg5d2" event={"ID":"86c1cfd8-fc51-4887-b577-db6c68dcf65f","Type":"ContainerDied","Data":"a459bde6c3632ed3f156e113d8709d383f0c7e7bc86b2b84d49c00195e678286"} Mar 20 00:31:43 crc kubenswrapper[5106]: I0320 00:31:43.527205 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg5d2" event={"ID":"86c1cfd8-fc51-4887-b577-db6c68dcf65f","Type":"ContainerStarted","Data":"10aeb679697a1d38b107c2707ed0ddd0ffbed7acd8d34cb7ddcb91b187324dab"} Mar 20 00:31:45 crc kubenswrapper[5106]: E0320 00:31:45.234493 5106 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86c1cfd8_fc51_4887_b577_db6c68dcf65f.slice/crio-4b86dfbde102439e73ef46bf2955649d339fa6c02c4acc65dba6e74d90ab88d1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86c1cfd8_fc51_4887_b577_db6c68dcf65f.slice/crio-conmon-4b86dfbde102439e73ef46bf2955649d339fa6c02c4acc65dba6e74d90ab88d1.scope\": RecentStats: unable to find data in memory cache]" Mar 20 00:31:45 crc kubenswrapper[5106]: I0320 00:31:45.548059 5106 generic.go:358] "Generic (PLEG): container finished" podID="86c1cfd8-fc51-4887-b577-db6c68dcf65f" 
containerID="4b86dfbde102439e73ef46bf2955649d339fa6c02c4acc65dba6e74d90ab88d1" exitCode=0
Mar 20 00:31:45 crc kubenswrapper[5106]: I0320 00:31:45.548212 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg5d2" event={"ID":"86c1cfd8-fc51-4887-b577-db6c68dcf65f","Type":"ContainerDied","Data":"4b86dfbde102439e73ef46bf2955649d339fa6c02c4acc65dba6e74d90ab88d1"}
Mar 20 00:31:46 crc kubenswrapper[5106]: I0320 00:31:46.559492 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg5d2" event={"ID":"86c1cfd8-fc51-4887-b577-db6c68dcf65f","Type":"ContainerStarted","Data":"d48918bc2d7148837d6ed39290bb5be998bd22f7f9c33149515590ef21ad19ff"}
Mar 20 00:31:46 crc kubenswrapper[5106]: I0320 00:31:46.578140 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cg5d2" podStartSLOduration=3.738806902 podStartE2EDuration="4.578121581s" podCreationTimestamp="2026-03-20 00:31:42 +0000 UTC" firstStartedPulling="2026-03-20 00:31:43.528036453 +0000 UTC m=+1357.961770507" lastFinishedPulling="2026-03-20 00:31:44.367351122 +0000 UTC m=+1358.801085186" observedRunningTime="2026-03-20 00:31:46.576647774 +0000 UTC m=+1361.010381838" watchObservedRunningTime="2026-03-20 00:31:46.578121581 +0000 UTC m=+1361.011855625"
Mar 20 00:31:52 crc kubenswrapper[5106]: I0320 00:31:52.699389 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-cg5d2"
Mar 20 00:31:52 crc kubenswrapper[5106]: I0320 00:31:52.700763 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cg5d2"
Mar 20 00:31:52 crc kubenswrapper[5106]: I0320 00:31:52.754531 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cg5d2"
Mar 20 00:31:53 crc kubenswrapper[5106]: I0320 00:31:53.654403 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cg5d2"
Mar 20 00:31:53 crc kubenswrapper[5106]: I0320 00:31:53.704527 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cg5d2"]
Mar 20 00:31:55 crc kubenswrapper[5106]: I0320 00:31:55.373099 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 00:31:55 crc kubenswrapper[5106]: I0320 00:31:55.373184 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 00:31:55 crc kubenswrapper[5106]: I0320 00:31:55.627807 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cg5d2" podUID="86c1cfd8-fc51-4887-b577-db6c68dcf65f" containerName="registry-server" containerID="cri-o://d48918bc2d7148837d6ed39290bb5be998bd22f7f9c33149515590ef21ad19ff" gracePeriod=2
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.027330 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cg5d2"
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.167295 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hjh6\" (UniqueName: \"kubernetes.io/projected/86c1cfd8-fc51-4887-b577-db6c68dcf65f-kube-api-access-9hjh6\") pod \"86c1cfd8-fc51-4887-b577-db6c68dcf65f\" (UID: \"86c1cfd8-fc51-4887-b577-db6c68dcf65f\") "
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.167717 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86c1cfd8-fc51-4887-b577-db6c68dcf65f-catalog-content\") pod \"86c1cfd8-fc51-4887-b577-db6c68dcf65f\" (UID: \"86c1cfd8-fc51-4887-b577-db6c68dcf65f\") "
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.167971 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86c1cfd8-fc51-4887-b577-db6c68dcf65f-utilities\") pod \"86c1cfd8-fc51-4887-b577-db6c68dcf65f\" (UID: \"86c1cfd8-fc51-4887-b577-db6c68dcf65f\") "
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.168838 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86c1cfd8-fc51-4887-b577-db6c68dcf65f-utilities" (OuterVolumeSpecName: "utilities") pod "86c1cfd8-fc51-4887-b577-db6c68dcf65f" (UID: "86c1cfd8-fc51-4887-b577-db6c68dcf65f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.174389 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86c1cfd8-fc51-4887-b577-db6c68dcf65f-kube-api-access-9hjh6" (OuterVolumeSpecName: "kube-api-access-9hjh6") pod "86c1cfd8-fc51-4887-b577-db6c68dcf65f" (UID: "86c1cfd8-fc51-4887-b577-db6c68dcf65f"). InnerVolumeSpecName "kube-api-access-9hjh6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.270171 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9hjh6\" (UniqueName: \"kubernetes.io/projected/86c1cfd8-fc51-4887-b577-db6c68dcf65f-kube-api-access-9hjh6\") on node \"crc\" DevicePath \"\""
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.270563 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86c1cfd8-fc51-4887-b577-db6c68dcf65f-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.287122 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86c1cfd8-fc51-4887-b577-db6c68dcf65f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86c1cfd8-fc51-4887-b577-db6c68dcf65f" (UID: "86c1cfd8-fc51-4887-b577-db6c68dcf65f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.371989 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86c1cfd8-fc51-4887-b577-db6c68dcf65f-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.636347 5106 generic.go:358] "Generic (PLEG): container finished" podID="86c1cfd8-fc51-4887-b577-db6c68dcf65f" containerID="d48918bc2d7148837d6ed39290bb5be998bd22f7f9c33149515590ef21ad19ff" exitCode=0
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.636478 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg5d2" event={"ID":"86c1cfd8-fc51-4887-b577-db6c68dcf65f","Type":"ContainerDied","Data":"d48918bc2d7148837d6ed39290bb5be998bd22f7f9c33149515590ef21ad19ff"}
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.636524 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cg5d2" event={"ID":"86c1cfd8-fc51-4887-b577-db6c68dcf65f","Type":"ContainerDied","Data":"10aeb679697a1d38b107c2707ed0ddd0ffbed7acd8d34cb7ddcb91b187324dab"}
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.636558 5106 scope.go:117] "RemoveContainer" containerID="d48918bc2d7148837d6ed39290bb5be998bd22f7f9c33149515590ef21ad19ff"
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.637781 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cg5d2"
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.655102 5106 scope.go:117] "RemoveContainer" containerID="4b86dfbde102439e73ef46bf2955649d339fa6c02c4acc65dba6e74d90ab88d1"
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.676849 5106 scope.go:117] "RemoveContainer" containerID="a459bde6c3632ed3f156e113d8709d383f0c7e7bc86b2b84d49c00195e678286"
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.678293 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cg5d2"]
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.733519 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cg5d2"]
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.741428 5106 scope.go:117] "RemoveContainer" containerID="d48918bc2d7148837d6ed39290bb5be998bd22f7f9c33149515590ef21ad19ff"
Mar 20 00:31:56 crc kubenswrapper[5106]: E0320 00:31:56.741936 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d48918bc2d7148837d6ed39290bb5be998bd22f7f9c33149515590ef21ad19ff\": container with ID starting with d48918bc2d7148837d6ed39290bb5be998bd22f7f9c33149515590ef21ad19ff not found: ID does not exist" containerID="d48918bc2d7148837d6ed39290bb5be998bd22f7f9c33149515590ef21ad19ff"
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.741991 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48918bc2d7148837d6ed39290bb5be998bd22f7f9c33149515590ef21ad19ff"} err="failed to get container status \"d48918bc2d7148837d6ed39290bb5be998bd22f7f9c33149515590ef21ad19ff\": rpc error: code = NotFound desc = could not find container \"d48918bc2d7148837d6ed39290bb5be998bd22f7f9c33149515590ef21ad19ff\": container with ID starting with d48918bc2d7148837d6ed39290bb5be998bd22f7f9c33149515590ef21ad19ff not found: ID does not exist"
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.742024 5106 scope.go:117] "RemoveContainer" containerID="4b86dfbde102439e73ef46bf2955649d339fa6c02c4acc65dba6e74d90ab88d1"
Mar 20 00:31:56 crc kubenswrapper[5106]: E0320 00:31:56.742425 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b86dfbde102439e73ef46bf2955649d339fa6c02c4acc65dba6e74d90ab88d1\": container with ID starting with 4b86dfbde102439e73ef46bf2955649d339fa6c02c4acc65dba6e74d90ab88d1 not found: ID does not exist" containerID="4b86dfbde102439e73ef46bf2955649d339fa6c02c4acc65dba6e74d90ab88d1"
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.742484 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b86dfbde102439e73ef46bf2955649d339fa6c02c4acc65dba6e74d90ab88d1"} err="failed to get container status \"4b86dfbde102439e73ef46bf2955649d339fa6c02c4acc65dba6e74d90ab88d1\": rpc error: code = NotFound desc = could not find container \"4b86dfbde102439e73ef46bf2955649d339fa6c02c4acc65dba6e74d90ab88d1\": container with ID starting with 4b86dfbde102439e73ef46bf2955649d339fa6c02c4acc65dba6e74d90ab88d1 not found: ID does not exist"
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.742524 5106 scope.go:117] "RemoveContainer" containerID="a459bde6c3632ed3f156e113d8709d383f0c7e7bc86b2b84d49c00195e678286"
Mar 20 00:31:56 crc kubenswrapper[5106]: E0320 00:31:56.742888 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a459bde6c3632ed3f156e113d8709d383f0c7e7bc86b2b84d49c00195e678286\": container with ID starting with a459bde6c3632ed3f156e113d8709d383f0c7e7bc86b2b84d49c00195e678286 not found: ID does not exist" containerID="a459bde6c3632ed3f156e113d8709d383f0c7e7bc86b2b84d49c00195e678286"
Mar 20 00:31:56 crc kubenswrapper[5106]: I0320 00:31:56.742929 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a459bde6c3632ed3f156e113d8709d383f0c7e7bc86b2b84d49c00195e678286"} err="failed to get container status \"a459bde6c3632ed3f156e113d8709d383f0c7e7bc86b2b84d49c00195e678286\": rpc error: code = NotFound desc = could not find container \"a459bde6c3632ed3f156e113d8709d383f0c7e7bc86b2b84d49c00195e678286\": container with ID starting with a459bde6c3632ed3f156e113d8709d383f0c7e7bc86b2b84d49c00195e678286 not found: ID does not exist"
Mar 20 00:31:57 crc kubenswrapper[5106]: I0320 00:31:57.184945 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86c1cfd8-fc51-4887-b577-db6c68dcf65f" path="/var/lib/kubelet/pods/86c1cfd8-fc51-4887-b577-db6c68dcf65f/volumes"
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.145081 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566112-r6jpv"]
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.147839 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86c1cfd8-fc51-4887-b577-db6c68dcf65f" containerName="extract-utilities"
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.148038 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c1cfd8-fc51-4887-b577-db6c68dcf65f" containerName="extract-utilities"
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.148141 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86c1cfd8-fc51-4887-b577-db6c68dcf65f" containerName="extract-content"
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.148244 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c1cfd8-fc51-4887-b577-db6c68dcf65f" containerName="extract-content"
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.148364 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86c1cfd8-fc51-4887-b577-db6c68dcf65f" containerName="registry-server"
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.148484 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c1cfd8-fc51-4887-b577-db6c68dcf65f" containerName="registry-server"
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.148841 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="86c1cfd8-fc51-4887-b577-db6c68dcf65f" containerName="registry-server"
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.154947 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566112-r6jpv"]
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.155132 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566112-r6jpv"
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.157662 5106 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5fjw8\""
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.157667 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.158889 5106 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.231831 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st2x5\" (UniqueName: \"kubernetes.io/projected/a11f82e8-a059-4e2d-8b42-fc6a8bcd7054-kube-api-access-st2x5\") pod \"auto-csr-approver-29566112-r6jpv\" (UID: \"a11f82e8-a059-4e2d-8b42-fc6a8bcd7054\") " pod="openshift-infra/auto-csr-approver-29566112-r6jpv"
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.333901 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-st2x5\" (UniqueName: \"kubernetes.io/projected/a11f82e8-a059-4e2d-8b42-fc6a8bcd7054-kube-api-access-st2x5\") pod \"auto-csr-approver-29566112-r6jpv\" (UID: \"a11f82e8-a059-4e2d-8b42-fc6a8bcd7054\") " pod="openshift-infra/auto-csr-approver-29566112-r6jpv"
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.351553 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-st2x5\" (UniqueName: \"kubernetes.io/projected/a11f82e8-a059-4e2d-8b42-fc6a8bcd7054-kube-api-access-st2x5\") pod \"auto-csr-approver-29566112-r6jpv\" (UID: \"a11f82e8-a059-4e2d-8b42-fc6a8bcd7054\") " pod="openshift-infra/auto-csr-approver-29566112-r6jpv"
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.483896 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566112-r6jpv"
Mar 20 00:32:00 crc kubenswrapper[5106]: I0320 00:32:00.682117 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566112-r6jpv"]
Mar 20 00:32:01 crc kubenswrapper[5106]: I0320 00:32:01.683123 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566112-r6jpv" event={"ID":"a11f82e8-a059-4e2d-8b42-fc6a8bcd7054","Type":"ContainerStarted","Data":"dc185de299fb882110b116843a74c9abad5458df646f27723fb22f1baca8481a"}
Mar 20 00:32:02 crc kubenswrapper[5106]: I0320 00:32:02.691799 5106 generic.go:358] "Generic (PLEG): container finished" podID="a11f82e8-a059-4e2d-8b42-fc6a8bcd7054" containerID="a7301afded7946fb2b22f4f8ab6684a306e62729410ffbd9bca3e345700e6af5" exitCode=0
Mar 20 00:32:02 crc kubenswrapper[5106]: I0320 00:32:02.691881 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566112-r6jpv" event={"ID":"a11f82e8-a059-4e2d-8b42-fc6a8bcd7054","Type":"ContainerDied","Data":"a7301afded7946fb2b22f4f8ab6684a306e62729410ffbd9bca3e345700e6af5"}
Mar 20 00:32:03 crc kubenswrapper[5106]: I0320 00:32:03.917680 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566112-r6jpv"
Mar 20 00:32:03 crc kubenswrapper[5106]: I0320 00:32:03.993212 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st2x5\" (UniqueName: \"kubernetes.io/projected/a11f82e8-a059-4e2d-8b42-fc6a8bcd7054-kube-api-access-st2x5\") pod \"a11f82e8-a059-4e2d-8b42-fc6a8bcd7054\" (UID: \"a11f82e8-a059-4e2d-8b42-fc6a8bcd7054\") "
Mar 20 00:32:03 crc kubenswrapper[5106]: I0320 00:32:03.999026 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a11f82e8-a059-4e2d-8b42-fc6a8bcd7054-kube-api-access-st2x5" (OuterVolumeSpecName: "kube-api-access-st2x5") pod "a11f82e8-a059-4e2d-8b42-fc6a8bcd7054" (UID: "a11f82e8-a059-4e2d-8b42-fc6a8bcd7054"). InnerVolumeSpecName "kube-api-access-st2x5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:32:04 crc kubenswrapper[5106]: I0320 00:32:04.095115 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-st2x5\" (UniqueName: \"kubernetes.io/projected/a11f82e8-a059-4e2d-8b42-fc6a8bcd7054-kube-api-access-st2x5\") on node \"crc\" DevicePath \"\""
Mar 20 00:32:04 crc kubenswrapper[5106]: I0320 00:32:04.707770 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566112-r6jpv" event={"ID":"a11f82e8-a059-4e2d-8b42-fc6a8bcd7054","Type":"ContainerDied","Data":"dc185de299fb882110b116843a74c9abad5458df646f27723fb22f1baca8481a"}
Mar 20 00:32:04 crc kubenswrapper[5106]: I0320 00:32:04.707833 5106 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc185de299fb882110b116843a74c9abad5458df646f27723fb22f1baca8481a"
Mar 20 00:32:04 crc kubenswrapper[5106]: I0320 00:32:04.707782 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566112-r6jpv"
Mar 20 00:32:04 crc kubenswrapper[5106]: I0320 00:32:04.985957 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566106-6ld88"]
Mar 20 00:32:04 crc kubenswrapper[5106]: I0320 00:32:04.996228 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566106-6ld88"]
Mar 20 00:32:05 crc kubenswrapper[5106]: I0320 00:32:05.169002 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fce4f3c7-8ca1-463a-b63b-63d0a5d5af90" path="/var/lib/kubelet/pods/fce4f3c7-8ca1-463a-b63b-63d0a5d5af90/volumes"
Mar 20 00:32:09 crc kubenswrapper[5106]: I0320 00:32:09.935048 5106 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-526s4"]
Mar 20 00:32:09 crc kubenswrapper[5106]: I0320 00:32:09.936325 5106 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a11f82e8-a059-4e2d-8b42-fc6a8bcd7054" containerName="oc"
Mar 20 00:32:09 crc kubenswrapper[5106]: I0320 00:32:09.936343 5106 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11f82e8-a059-4e2d-8b42-fc6a8bcd7054" containerName="oc"
Mar 20 00:32:09 crc kubenswrapper[5106]: I0320 00:32:09.936539 5106 memory_manager.go:356] "RemoveStaleState removing state" podUID="a11f82e8-a059-4e2d-8b42-fc6a8bcd7054" containerName="oc"
Mar 20 00:32:09 crc kubenswrapper[5106]: I0320 00:32:09.945973 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:09 crc kubenswrapper[5106]: I0320 00:32:09.951710 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-526s4"]
Mar 20 00:32:10 crc kubenswrapper[5106]: I0320 00:32:10.085255 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76dd727d-4ec9-434c-9ce6-4256a3962284-catalog-content\") pod \"certified-operators-526s4\" (UID: \"76dd727d-4ec9-434c-9ce6-4256a3962284\") " pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:10 crc kubenswrapper[5106]: I0320 00:32:10.085315 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76dd727d-4ec9-434c-9ce6-4256a3962284-utilities\") pod \"certified-operators-526s4\" (UID: \"76dd727d-4ec9-434c-9ce6-4256a3962284\") " pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:10 crc kubenswrapper[5106]: I0320 00:32:10.085665 5106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tmth\" (UniqueName: \"kubernetes.io/projected/76dd727d-4ec9-434c-9ce6-4256a3962284-kube-api-access-6tmth\") pod \"certified-operators-526s4\" (UID: \"76dd727d-4ec9-434c-9ce6-4256a3962284\") " pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:10 crc kubenswrapper[5106]: I0320 00:32:10.187648 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76dd727d-4ec9-434c-9ce6-4256a3962284-catalog-content\") pod \"certified-operators-526s4\" (UID: \"76dd727d-4ec9-434c-9ce6-4256a3962284\") " pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:10 crc kubenswrapper[5106]: I0320 00:32:10.187698 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76dd727d-4ec9-434c-9ce6-4256a3962284-utilities\") pod \"certified-operators-526s4\" (UID: \"76dd727d-4ec9-434c-9ce6-4256a3962284\") " pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:10 crc kubenswrapper[5106]: I0320 00:32:10.187750 5106 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6tmth\" (UniqueName: \"kubernetes.io/projected/76dd727d-4ec9-434c-9ce6-4256a3962284-kube-api-access-6tmth\") pod \"certified-operators-526s4\" (UID: \"76dd727d-4ec9-434c-9ce6-4256a3962284\") " pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:10 crc kubenswrapper[5106]: I0320 00:32:10.188335 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76dd727d-4ec9-434c-9ce6-4256a3962284-catalog-content\") pod \"certified-operators-526s4\" (UID: \"76dd727d-4ec9-434c-9ce6-4256a3962284\") " pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:10 crc kubenswrapper[5106]: I0320 00:32:10.188373 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76dd727d-4ec9-434c-9ce6-4256a3962284-utilities\") pod \"certified-operators-526s4\" (UID: \"76dd727d-4ec9-434c-9ce6-4256a3962284\") " pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:10 crc kubenswrapper[5106]: I0320 00:32:10.210370 5106 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tmth\" (UniqueName: \"kubernetes.io/projected/76dd727d-4ec9-434c-9ce6-4256a3962284-kube-api-access-6tmth\") pod \"certified-operators-526s4\" (UID: \"76dd727d-4ec9-434c-9ce6-4256a3962284\") " pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:10 crc kubenswrapper[5106]: I0320 00:32:10.266688 5106 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:10 crc kubenswrapper[5106]: I0320 00:32:10.773841 5106 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-526s4"]
Mar 20 00:32:11 crc kubenswrapper[5106]: I0320 00:32:11.765735 5106 generic.go:358] "Generic (PLEG): container finished" podID="76dd727d-4ec9-434c-9ce6-4256a3962284" containerID="c6dda6af355de8e874324d02cf63c196bb536a6512d4ea2dd3020a7f6dba1632" exitCode=0
Mar 20 00:32:11 crc kubenswrapper[5106]: I0320 00:32:11.765830 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-526s4" event={"ID":"76dd727d-4ec9-434c-9ce6-4256a3962284","Type":"ContainerDied","Data":"c6dda6af355de8e874324d02cf63c196bb536a6512d4ea2dd3020a7f6dba1632"}
Mar 20 00:32:11 crc kubenswrapper[5106]: I0320 00:32:11.766009 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-526s4" event={"ID":"76dd727d-4ec9-434c-9ce6-4256a3962284","Type":"ContainerStarted","Data":"7dc7100484879aedb2b3895908939662f5eef56aea05cad4c47142830b7612ce"}
Mar 20 00:32:13 crc kubenswrapper[5106]: I0320 00:32:13.785013 5106 generic.go:358] "Generic (PLEG): container finished" podID="76dd727d-4ec9-434c-9ce6-4256a3962284" containerID="bef4fe82589ab03a0150ef9c4c7b14e216eb5859681dfbb8e6f24ddb1b3e5c35" exitCode=0
Mar 20 00:32:13 crc kubenswrapper[5106]: I0320 00:32:13.785097 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-526s4" event={"ID":"76dd727d-4ec9-434c-9ce6-4256a3962284","Type":"ContainerDied","Data":"bef4fe82589ab03a0150ef9c4c7b14e216eb5859681dfbb8e6f24ddb1b3e5c35"}
Mar 20 00:32:14 crc kubenswrapper[5106]: I0320 00:32:14.795225 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-526s4" event={"ID":"76dd727d-4ec9-434c-9ce6-4256a3962284","Type":"ContainerStarted","Data":"ac1d07def0dfe7436fdbda54f46e4723957f740d732348b4ea9f15b695d6f0e2"}
Mar 20 00:32:14 crc kubenswrapper[5106]: I0320 00:32:14.813516 5106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-526s4" podStartSLOduration=4.858915634 podStartE2EDuration="5.813497014s" podCreationTimestamp="2026-03-20 00:32:09 +0000 UTC" firstStartedPulling="2026-03-20 00:32:11.766714379 +0000 UTC m=+1386.200448433" lastFinishedPulling="2026-03-20 00:32:12.721295759 +0000 UTC m=+1387.155029813" observedRunningTime="2026-03-20 00:32:14.809818582 +0000 UTC m=+1389.243552646" watchObservedRunningTime="2026-03-20 00:32:14.813497014 +0000 UTC m=+1389.247231078"
Mar 20 00:32:20 crc kubenswrapper[5106]: I0320 00:32:20.268244 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:20 crc kubenswrapper[5106]: I0320 00:32:20.268898 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:20 crc kubenswrapper[5106]: I0320 00:32:20.314435 5106 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:20 crc kubenswrapper[5106]: I0320 00:32:20.910570 5106 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:20 crc kubenswrapper[5106]: I0320 00:32:20.957963 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-526s4"]
Mar 20 00:32:22 crc kubenswrapper[5106]: I0320 00:32:22.875838 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-526s4" podUID="76dd727d-4ec9-434c-9ce6-4256a3962284" containerName="registry-server" containerID="cri-o://ac1d07def0dfe7436fdbda54f46e4723957f740d732348b4ea9f15b695d6f0e2" gracePeriod=2
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.321094 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.340227 5106 scope.go:117] "RemoveContainer" containerID="3c65fc18b6c4cf5fe0694a1228003db5194716f8e8ada8a99070caf8f3e436c4"
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.406780 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tmth\" (UniqueName: \"kubernetes.io/projected/76dd727d-4ec9-434c-9ce6-4256a3962284-kube-api-access-6tmth\") pod \"76dd727d-4ec9-434c-9ce6-4256a3962284\" (UID: \"76dd727d-4ec9-434c-9ce6-4256a3962284\") "
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.406940 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76dd727d-4ec9-434c-9ce6-4256a3962284-catalog-content\") pod \"76dd727d-4ec9-434c-9ce6-4256a3962284\" (UID: \"76dd727d-4ec9-434c-9ce6-4256a3962284\") "
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.406964 5106 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76dd727d-4ec9-434c-9ce6-4256a3962284-utilities\") pod \"76dd727d-4ec9-434c-9ce6-4256a3962284\" (UID: \"76dd727d-4ec9-434c-9ce6-4256a3962284\") "
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.409312 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76dd727d-4ec9-434c-9ce6-4256a3962284-utilities" (OuterVolumeSpecName: "utilities") pod "76dd727d-4ec9-434c-9ce6-4256a3962284" (UID: "76dd727d-4ec9-434c-9ce6-4256a3962284"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.421347 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76dd727d-4ec9-434c-9ce6-4256a3962284-kube-api-access-6tmth" (OuterVolumeSpecName: "kube-api-access-6tmth") pod "76dd727d-4ec9-434c-9ce6-4256a3962284" (UID: "76dd727d-4ec9-434c-9ce6-4256a3962284"). InnerVolumeSpecName "kube-api-access-6tmth". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.452689 5106 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76dd727d-4ec9-434c-9ce6-4256a3962284-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76dd727d-4ec9-434c-9ce6-4256a3962284" (UID: "76dd727d-4ec9-434c-9ce6-4256a3962284"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.510406 5106 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6tmth\" (UniqueName: \"kubernetes.io/projected/76dd727d-4ec9-434c-9ce6-4256a3962284-kube-api-access-6tmth\") on node \"crc\" DevicePath \"\""
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.510459 5106 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76dd727d-4ec9-434c-9ce6-4256a3962284-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.510475 5106 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76dd727d-4ec9-434c-9ce6-4256a3962284-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.904402 5106 generic.go:358] "Generic (PLEG): container finished" podID="76dd727d-4ec9-434c-9ce6-4256a3962284" containerID="ac1d07def0dfe7436fdbda54f46e4723957f740d732348b4ea9f15b695d6f0e2" exitCode=0
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.904491 5106 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-526s4"
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.904488 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-526s4" event={"ID":"76dd727d-4ec9-434c-9ce6-4256a3962284","Type":"ContainerDied","Data":"ac1d07def0dfe7436fdbda54f46e4723957f740d732348b4ea9f15b695d6f0e2"}
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.904618 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-526s4" event={"ID":"76dd727d-4ec9-434c-9ce6-4256a3962284","Type":"ContainerDied","Data":"7dc7100484879aedb2b3895908939662f5eef56aea05cad4c47142830b7612ce"}
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.904644 5106 scope.go:117] "RemoveContainer" containerID="ac1d07def0dfe7436fdbda54f46e4723957f740d732348b4ea9f15b695d6f0e2"
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.922607 5106 scope.go:117] "RemoveContainer" containerID="bef4fe82589ab03a0150ef9c4c7b14e216eb5859681dfbb8e6f24ddb1b3e5c35"
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.959242 5106 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-526s4"]
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.964554 5106 scope.go:117] "RemoveContainer" containerID="c6dda6af355de8e874324d02cf63c196bb536a6512d4ea2dd3020a7f6dba1632"
Mar 20 00:32:23 crc kubenswrapper[5106]: I0320 00:32:23.971119 5106 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-526s4"]
Mar 20 00:32:24 crc kubenswrapper[5106]: I0320 00:32:24.000741 5106 scope.go:117] "RemoveContainer" containerID="ac1d07def0dfe7436fdbda54f46e4723957f740d732348b4ea9f15b695d6f0e2"
Mar 20 00:32:24 crc kubenswrapper[5106]: E0320 00:32:24.001309 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac1d07def0dfe7436fdbda54f46e4723957f740d732348b4ea9f15b695d6f0e2\": container with ID starting with ac1d07def0dfe7436fdbda54f46e4723957f740d732348b4ea9f15b695d6f0e2 not found: ID does not exist" containerID="ac1d07def0dfe7436fdbda54f46e4723957f740d732348b4ea9f15b695d6f0e2"
Mar 20 00:32:24 crc kubenswrapper[5106]: I0320 00:32:24.001358 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac1d07def0dfe7436fdbda54f46e4723957f740d732348b4ea9f15b695d6f0e2"} err="failed to get container status \"ac1d07def0dfe7436fdbda54f46e4723957f740d732348b4ea9f15b695d6f0e2\": rpc error: code = NotFound desc = could not find container \"ac1d07def0dfe7436fdbda54f46e4723957f740d732348b4ea9f15b695d6f0e2\": container with ID starting with ac1d07def0dfe7436fdbda54f46e4723957f740d732348b4ea9f15b695d6f0e2 not found: ID does not exist"
Mar 20 00:32:24 crc kubenswrapper[5106]: I0320 00:32:24.001383 5106 scope.go:117] "RemoveContainer" containerID="bef4fe82589ab03a0150ef9c4c7b14e216eb5859681dfbb8e6f24ddb1b3e5c35"
Mar 20 00:32:24 crc kubenswrapper[5106]: E0320 00:32:24.001727 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bef4fe82589ab03a0150ef9c4c7b14e216eb5859681dfbb8e6f24ddb1b3e5c35\": container with ID starting with bef4fe82589ab03a0150ef9c4c7b14e216eb5859681dfbb8e6f24ddb1b3e5c35 not found: ID does not exist" containerID="bef4fe82589ab03a0150ef9c4c7b14e216eb5859681dfbb8e6f24ddb1b3e5c35"
Mar 20 00:32:24 crc kubenswrapper[5106]: I0320 00:32:24.001793 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef4fe82589ab03a0150ef9c4c7b14e216eb5859681dfbb8e6f24ddb1b3e5c35"} err="failed to get container status \"bef4fe82589ab03a0150ef9c4c7b14e216eb5859681dfbb8e6f24ddb1b3e5c35\": rpc error: code = NotFound desc = could not find container \"bef4fe82589ab03a0150ef9c4c7b14e216eb5859681dfbb8e6f24ddb1b3e5c35\": container with ID starting with bef4fe82589ab03a0150ef9c4c7b14e216eb5859681dfbb8e6f24ddb1b3e5c35 not found: ID does not exist"
Mar 20 00:32:24 crc kubenswrapper[5106]: I0320 00:32:24.001823 5106 scope.go:117] "RemoveContainer" containerID="c6dda6af355de8e874324d02cf63c196bb536a6512d4ea2dd3020a7f6dba1632"
Mar 20 00:32:24 crc kubenswrapper[5106]: E0320 00:32:24.002086 5106 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6dda6af355de8e874324d02cf63c196bb536a6512d4ea2dd3020a7f6dba1632\": container with ID starting with c6dda6af355de8e874324d02cf63c196bb536a6512d4ea2dd3020a7f6dba1632 not found: ID does not exist" containerID="c6dda6af355de8e874324d02cf63c196bb536a6512d4ea2dd3020a7f6dba1632"
Mar 20 00:32:24 crc kubenswrapper[5106]: I0320 00:32:24.002108 5106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6dda6af355de8e874324d02cf63c196bb536a6512d4ea2dd3020a7f6dba1632"} err="failed to get container status \"c6dda6af355de8e874324d02cf63c196bb536a6512d4ea2dd3020a7f6dba1632\": rpc error: code = NotFound desc = could not find container \"c6dda6af355de8e874324d02cf63c196bb536a6512d4ea2dd3020a7f6dba1632\": container with ID starting with c6dda6af355de8e874324d02cf63c196bb536a6512d4ea2dd3020a7f6dba1632 not found: ID does not exist"
Mar 20 00:32:25 crc kubenswrapper[5106]: I0320 00:32:25.171663 5106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76dd727d-4ec9-434c-9ce6-4256a3962284" path="/var/lib/kubelet/pods/76dd727d-4ec9-434c-9ce6-4256a3962284/volumes"
Mar 20 00:32:25 crc kubenswrapper[5106]: I0320 00:32:25.374232 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:32:25 crc kubenswrapper[5106]: I0320 00:32:25.374389 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:32:55 crc kubenswrapper[5106]: I0320 00:32:55.386341 5106 patch_prober.go:28] interesting pod/machine-config-daemon-769dn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 00:32:55 crc kubenswrapper[5106]: I0320 00:32:55.387119 5106 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 00:32:55 crc kubenswrapper[5106]: I0320 00:32:55.387203 5106 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-769dn" Mar 20 00:32:55 crc kubenswrapper[5106]: I0320 00:32:55.388387 5106 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"46616f9eebe95046bc1d2edb3df0d47caf5a433032dac93086437bbdcf07a2b3"} pod="openshift-machine-config-operator/machine-config-daemon-769dn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 00:32:55 crc 
kubenswrapper[5106]: I0320 00:32:55.388462 5106 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-769dn" podUID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerName="machine-config-daemon" containerID="cri-o://46616f9eebe95046bc1d2edb3df0d47caf5a433032dac93086437bbdcf07a2b3" gracePeriod=600 Mar 20 00:32:56 crc kubenswrapper[5106]: I0320 00:32:56.166775 5106 generic.go:358] "Generic (PLEG): container finished" podID="9a6c6201-eadf-497e-921b-e5fcec3ccddb" containerID="46616f9eebe95046bc1d2edb3df0d47caf5a433032dac93086437bbdcf07a2b3" exitCode=0 Mar 20 00:32:56 crc kubenswrapper[5106]: I0320 00:32:56.167043 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerDied","Data":"46616f9eebe95046bc1d2edb3df0d47caf5a433032dac93086437bbdcf07a2b3"} Mar 20 00:32:56 crc kubenswrapper[5106]: I0320 00:32:56.167652 5106 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-769dn" event={"ID":"9a6c6201-eadf-497e-921b-e5fcec3ccddb","Type":"ContainerStarted","Data":"55835991a9861cf4121d7818db46bb3335ba14cb0fb222d10936dd745e8016a3"} Mar 20 00:32:56 crc kubenswrapper[5106]: I0320 00:32:56.167703 5106 scope.go:117] "RemoveContainer" containerID="cf7ec03b37e8c742f509fc6499c29d91dba9387492f15bc72efea2582dec2229" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515157112762024454 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015157112762017371 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015157107411016507 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015157107411015457 5ustar corecore